Gems - A Gemini Gem Manager on your MacBook powered by Gemma

Gems is a zsh script designed to simplify interacting with local Large Language Models using Ollama. It allows you to execute LLM commands with pre-defined prompts, making it easier to perform specific tasks without writing new prompts every time.

Inspired by the workflow described in this Hacker News post by eliya_confiant: https://news.ycombinator.com/item?id=39592297.


Features

  1. Multiple Prompt Templates: Choose from prompts like EditingAssistant, ExplainCode, Summary.
  2. Model Selection: Use -m to specify an AI model (defaults to gemma3:12b).
  3. Language Detection: EditingAssistant detects the input language so revisions come back in the same language.
  4. Clipboard & Alerts: Copies output to the clipboard and shows a notification.
  5. Markdown Display: Render markdown in Terminal/iTerm2 (with glow) or Warp.
  6. macOS Integration: Invoke via Shortcuts or Automator with a right-click or a keyboard shortcut.

Installation

A. Running LLM from the Command Line

  1. Ollama

    brew install ollama
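
    After installing, make sure the Ollama service is running (e.g., via brew services) and pull the default model used by gems.sh; adjust the tag if you prefer a different size:

      brew services start ollama
      ollama pull gemma3:12b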

B. Markdown Renderer (Optional)

  1. glow

    brew install glow
    • For rendering Markdown in Terminal or iTerm2.
  2. Warp

    brew install --cask warp
    • Alternative terminal that can display Markdown.
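
    To confirm the renderer works, preview any Markdown file in pager mode, which is how the script invokes glow:

      glow -p somefile.md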

Why Markdown Rendering is Optional

By default, the script copies the LLM output to your clipboard. It can also render the Markdown output in your terminal, but that feature is optional and requires additional setup.

Enabling Markdown Rendering

  1. Choose a Renderer: Install glow and/or Warp.
  2. Settings: In gems.sh, set RESULT_VIEWER_APP to your preferred renderer.
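
    For example, to open results in Warp, edit the configuration block near the top of gems.sh (leave the value empty to keep clipboard-only behavior):

      RESULT_VIEWER_APP="Warp" # or "Terminal" / "iTerm2", both of which render via glow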

Setting Up via macOS Shortcuts

In addition to using an Automator Quick Action, you can leverage the modern macOS Shortcuts app to run Gems. This method provides a more integrated experience with additional options for input, notifications, and even Siri activation.


Steps to Create a Shortcut for Gems

  1. Open the Shortcuts App
    Launch the Shortcuts app on your Mac (available on macOS Monterey and later).

  2. Create a New Shortcut

    • Click the “+” button to create a new shortcut.
    • Give your shortcut a descriptive name (e.g., Ask AI).
  3. Add a “Run Shell Script” Action

    • In the Shortcuts editor’s search bar, type “Run Shell Script” and add the action to your workflow.

    • Paste the command that runs your gems.sh script. For example:

      /Users/yourname/scripts/gems.sh "$@"
    • Be sure to use the full path to your script so that Shortcuts can locate it properly.

  4. Configure Input

    • Search for “Get Text from Input” in the Shortcuts editor and insert this action above your Run Shell Script action.
    • In the Get Text from Input action, set the input source to Shortcut Input.
    • Then adjust its settings: change the Receive Type to Text and set Input From to Quick Actions.
    • Configure this step so that if no input is provided (for example, no text is selected), the shortcut automatically prompts you with Ask For Text.
    • Finally, go back to your Run Shell Script action and set its “Input” option to the variable output from the previous step, ensuring it is passed as Text to your script.
  5. Assign a Keyboard Shortcut (Optional)

    • With your shortcut open, click the Settings icon (a slider or “i” icon) and select “Add Keyboard Shortcut”.
    • Choose a convenient key combination so you can trigger Gems from anywhere.
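
As a minimal sketch, assuming the action's shell is zsh and its “Pass Input” option is set to “as arguments”, the Run Shell Script body could also preselect a template so no picker dialog appears:

    /Users/yourname/scripts/gems.sh -t Summary "$@"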

Setting Up via Automator (Not Recommended)


Creating a Quick Action

  1. Open Automator and create a Quick Action.
  2. Add a Run Shell Script action, set “Workflow receives current” to text in any application, and paste the contents of gems.sh.
  3. Save. You can now select text, right-click, and choose it from Services.

Keyboard Shortcut Setup

  1. System Settings → Keyboard → Keyboard Shortcuts...
  2. Under Services → Text, locate your Quick Action.
  3. Assign a shortcut.

Usage Examples

  • Prompt directly to the AI

    ./gems.sh "Summarize these notes"
  • Specify the model and the prompt template

    ./gems.sh -m "gemma3:27b" -t ExplainCode "def example_func(x): return x*2"
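
  • Preselect a template and enable verbose output (the -t and -v flags are defined in gems.sh)

    ./gems.sh -v -t Summary "Summarize these notes"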

NOTE

To view rendered Markdown output, make sure your chosen terminal application (e.g., Warp) is already running so it can display the result.

#!/bin/zsh
#==========================================================
# LLM Prompt Tool
#
# This script runs a local LLM command and applies selected
# pre-configured prompts to user input, making it easy to use LLMs
# for specific tasks without writing new prompts each time.
#==========================================================
#==========================================================
# CONFIGURATION
#==========================================================
# LLM settings
LLM_COMMAND="ollama" # Command to run LLM
LLM_ATTR="run" # Command attribute (run for Ollama)
DEFAULT_MODEL="gemma3:12b" # Default model to use if none specified
LANGUAGE_DETECTION_MODEL="gemma3" # Model used for language detection
DEFAULT_PROMPT_TEMPLATE="Passthrough" # Default prompt template if none selected
# Output settings
RESULT_VIEWER_APP="" # Application to open results: Warp, Terminal, or iTerm2 (leave empty to copy to clipboard only)
#==========================================================
# FUNCTIONS
#==========================================================
# Initialize the prompt templates with instructions
function init_prompt_templates() {
declare -gA PROMPT_TEMPLATES
# Basic templates
PROMPT_TEMPLATES["Passthrough"]=" "
PROMPT_TEMPLATES["Summary"]="Summarize the following text in a concise and clear manner. Ensure that all main points are included and that the summary is coherent and easy to understand.\nText:"
# Text improvement templates
PROMPT_TEMPLATES["EditingAssistant"]="Revise the following text for clarity, grammar, word choice, and sentence structure. Maintain a neutral tone and conversational style. Ensure that the revisions enhance readability while preserving the original meaning.\nText:"
PROMPT_TEMPLATES["Insight"]="You are an expert in various domains with a specialized task: analyzing text to uncover insights. Your responsibilities include:\n1) Summarizing the key points and arguments presented in the text.\n2) Identifying any underlying assumptions or biases that may be influencing the author's perspective.\n3) Analyzing the potential implications or consequences of the ideas presented.\n4) Offering your own insights or interpretations, considering the context, author's perspective, and the potential implications.\nPlease analyze the following text:"
# Technical templates
PROMPT_TEMPLATES["ExplainCode"]="Break down the following code snippet line by line. For each line, explain its purpose and how it contributes to the overall logic of the code. If applicable, describe any potential business applications or use cases where this code might be utilized. Additionally, reason through the code step-by-step to provide a comprehensive understanding.\nCode:"
# Business templates
PROMPT_TEMPLATES["MeetingNote"]="You are a strategic meeting analyst and facilitator, not merely a note-taker. Your task is to transform the raw meeting transcription below into a powerful document that drives future action and clarifies strategic implications. Do not simply summarize; *reorganize, synthesize, and analyze* the information to highlight key takeaways and facilitate decision-making.\nConsider the meeting participants' likely goals, unspoken assumptions, and potential blind spots. Identify the core issues *driving* the discussion, even if those issues are not explicitly stated.\nProduce a document structured as follows:\n**1. Executive Summary (Most Important - 1-3 sentences MAX):**\n * What is the *single most critical outcome or insight* from this meeting? What *must* stakeholders know immediately? This should be a compelling statement of the meeting's impact.\n * Example: *\"The meeting revealed a critical misalignment between the marketing and sales teams regarding lead qualification criteria, jeopardizing Q3 targets.\"* (Not just: \"The meeting was about marketing and sales.\")\n**2. Strategic Context & Objectives (Beyond the stated purpose):**\n * What was the *stated* purpose of the meeting?\n * What were the *underlying, potentially unstated* goals and objectives? (Infer these from the discussion.) What problem were they *really* trying to solve?\n * What broader strategic initiatives or company goals does this meeting relate to?\n * Were the objectives achieved? If not, why not? (Be concise, but offer an opinion.)\n**3. Core Issues & Disagreements (Synthesized and Prioritized):**\n * Do *not* list discussion points chronologically. Instead, identify the 2-4 *most critical, underlying issues* that drove the conversation, even if they weren't explicitly labeled as such. These should be *themes*, not topics.\n * For *each* core issue:\n * **Issue Statement:** Clearly define the problem or opportunity.\n * **Key Perspectives:** Briefly summarize the *different viewpoints* or arguments presented, highlighting areas of agreement and *disagreement*. Name participants if their perspective is particularly significant.\n * **Implications:** What are the potential *consequences* of this issue (positive or negative) if left unaddressed?\n * **Open Questions:** Identify any important unanswered questions or areas needing further investigation related to this issue.\n**4. Decisions & Commitments (Clarified and Confirmed):**\n * List any *explicit* decisions made. Be very precise about the wording of the decision.\n * List any *implicit* decisions or commitments made (things people agreed to do, even if not formally stated as a "decision"). Phrase these as clear commitments.\n * For *each* decision/commitment:\n * **What:** State the decision/commitment clearly.\n * **Who:** Identify the responsible party (individual or team).\n * **When:** Specify any deadlines or timelines (explicit or implied).\n * **How will success be measured:** If not obvious or discussed, suggest a metric.\n**5. Action Items & Next Steps (Prioritized and Actionable):**\n * Create a prioritized list of *specific, measurable, achievable, relevant, and time-bound (SMART)* action items. Do not just list tasks mentioned; rephrase them to be actionable.\n * For *each* action item:\n * **Action:** Describe the action concisely and clearly. 
Use verbs that indicate clear action (e.g., "Develop," "Research," "Schedule," "Present").\n * **Owner:** Assign a single, directly responsible individual (not a team).\n * **Deadline:** Specify a clear due date.\n * **Dependencies:** Note any other actions or decisions this item depends on.\n * **Priority:** High/Medium/Low (or a numerical scale if appropriate)\n * **Status:** (Leave this blank for the initial output, but it's a good practice to include for future updates.)\n**6. Risks, Roadblocks, & Open Questions (Proactive Issue Identification):**\n * Identify any *potential risks* or roadblocks that could hinder progress on action items or the overall objectives.\n * List any significant *unanswered questions* that emerged from the meeting and require further investigation. These should be questions that, if answered, would significantly impact decisions or strategy.\n**7. Key Insights & Recommendations (Your Expert Analysis):**\n * Based on your analysis, offer 2-3 *key insights* that go beyond the surface-level discussion. What did you learn from analyzing this meeting that attendees might have missed?\n * Provide 2-3 concrete *recommendations* for next steps, even if those steps weren't explicitly discussed in the meeting. These should be strategic recommendations, not just tactical tasks. Consider what *should* happen next to maximize the value of this meeting.\n**Transcription:**"
# Add new prompt templates below this line
# Example format:
# PROMPT_TEMPLATES["TemplateName"]="Your Prompt Template"
}
# Get available models from ollama
function get_available_models() {
# Check if ollama command exists
if ! command -v "$LLM_COMMAND" &> /dev/null; then
echo "Error: '$LLM_COMMAND' is not installed or not in PATH."
return 1
fi
# Run ollama ls and extract the model names (first column), skipping the header row
local models
models=$($LLM_COMMAND ls 2>/dev/null | awk 'NR>1 {print $1}' | sort)
echo "$models"
}
# Display usage information
function show_help() {
echo "Usage: $0 [-m model] [-t template] [-v] [text]"
echo "Options:"
echo " -m <model> Specify LLM model (default: $DEFAULT_MODEL)"
echo " -t <template> Specify prompt template to use"
echo " -v Verbose mode (show debug information)"
echo " -h Display this help message"
echo ""
echo "Available prompt templates:"
for template_name in ${(k)PROMPT_TEMPLATES}; do
echo " - $template_name"
done
echo ""
echo "Available models:"
local available_models
available_models=$(get_available_models)
if [ $? -eq 0 ] && [ -n "$available_models" ]; then
echo "$available_models" | while read -r model; do
echo " - $model"
done
else
echo " Unable to retrieve model list. Check if ollama is installed and running."
fi
exit 0
}
# Parse command line arguments
function parse_arguments() {
while getopts ":m:t:vh" opt; do
case $opt in
m) SELECTED_MODEL="$OPTARG" ;;
t) SELECTED_TEMPLATE="$OPTARG" ;;
v) VERBOSE_MODE=true ;;
h) show_help ;;
\?) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
esac
done
# Set default model if not specified
if [ -z "$SELECTED_MODEL" ]; then
SELECTED_MODEL="$DEFAULT_MODEL"
fi
# Set default for verbose mode if not specified
if [ -z "$VERBOSE_MODE" ]; then
VERBOSE_MODE=false
fi
# Shift past the processed options to get user input
shift $((OPTIND-1))
USER_INPUT="$@"
# Check if input is empty
if [ -z "$USER_INPUT" ]; then
echo "Error: No input provided. Please provide text to process."
echo "Use -h for help information."
exit 1
fi
}
# Log message if in verbose mode
function log_verbose() {
if [ "$VERBOSE_MODE" = true ]; then
echo "[DEBUG] $1"
fi
}
# Select prompt template using GUI if not provided via command line
function select_prompt_template() {
# Build a comma-separated list of quoted template names so the
# AppleScript list literal contains strings, not bare identifiers
available_templates=""
for template_name in ${(k)PROMPT_TEMPLATES}; do
if [[ $available_templates == "" ]]; then
available_templates="\"$template_name\""
else
available_templates="$available_templates, \"$template_name\""
fi
done
# Prompt user to select template if not provided via command line
if [ -z "$SELECTED_TEMPLATE" ]; then
SELECTED_TEMPLATE=$(osascript -e "choose from list {$available_templates} with prompt \"Select a prompt template to use:\" default items {\"$DEFAULT_PROMPT_TEMPLATE\"}")
if [ "$SELECTED_TEMPLATE" = "false" ]; then
echo "No template selected. Operation cancelled."
exit 0
fi
fi
}
# Verify that all required dependencies are installed and accessible
function verify_dependencies() {
if ! command -v "$LLM_COMMAND" &> /dev/null; then
echo "Error: '$LLM_COMMAND' is not installed or not in PATH."
echo "Please install $LLM_COMMAND: https://ollama.com/download"
exit 1
fi
}
# Identify the language of input text
function detect_language() {
local input_text="$1"
local model="$2"
local detection_prompt="You are a language identification specialist. Your only task is to determine the language of the provided text. Identify the language of this text. Respond with only the language name (e.g., 'English', 'Spanish', 'Japanese'): $input_text"
# Run language detection
local detected_language
detected_language=$($LLM_COMMAND $LLM_ATTR "$model" "$detection_prompt" | head -n 1)
echo "$detected_language"
}
# Process user input with selected template
function process_with_template() {
# Get system prompt from template
local system_prompt="${PROMPT_TEMPLATES[$SELECTED_TEMPLATE]}"
local language_system_prompt=""
local final_prompt=""
local response=""
# Add language detection for EditingAssistant template
if [[ "$SELECTED_TEMPLATE" == "EditingAssistant" ]]; then
log_verbose "Detecting input language..."
local detected_language
detected_language=$(detect_language "$USER_INPUT" "$LANGUAGE_DETECTION_MODEL")
language_system_prompt="Translate your output into $detected_language"
log_verbose "Language detected: $detected_language"
fi
# Show debug information in verbose mode
if [[ "$VERBOSE_MODE" == true ]]; then
log_verbose "System prompt: $system_prompt"
[[ -n "$language_system_prompt" ]] && log_verbose "Language instruction: $language_system_prompt"
log_verbose "User input: $USER_INPUT"
fi
# Construct final prompt
final_prompt="$system_prompt $USER_INPUT"
[[ -n "$language_system_prompt" ]] && final_prompt="$final_prompt\n$language_system_prompt"
# Execute LLM command
response=$($LLM_COMMAND $LLM_ATTR $SELECTED_MODEL "$final_prompt")
local exit_code=$?
# Handle errors
if [[ $exit_code -ne 0 ]]; then
echo "Error: LLM command failed with code $exit_code"
exit $exit_code
fi
if [[ -z "$response" ]]; then
echo "Error: No response received from the model"
exit 1
fi
# Process the result
process_result "$response" "$final_prompt"
}
# Process and display the LLM output
function process_result() {
local llm_response="$1"
local prompt="$2"
# exit if the result is empty
if [ -z "$llm_response" ]; then
echo "Error: Empty response from LLM. Check model configuration."
exit 1
fi
# Display LLM response heading only in verbose mode
if [ "$VERBOSE_MODE" = true ]; then
log_verbose "LLM response:"
fi
# Always show the actual response
echo "$llm_response"
# Copy the raw response to the clipboard; printf preserves newlines
printf '%s' "$llm_response" | pbcopy
osascript -e "display notification \"LLM results copied to clipboard\""
# Create temporary markdown file with preserved newlines
local temp_file="$(mktemp).md"
printf "%s\n---\n%s" "$prompt" "$llm_response" > "$temp_file"
# Display results according to preference
case "$RESULT_VIEWER_APP" in
"Terminal")
osascript -e "tell application \"Terminal\"
do script \"glow -p ${temp_file} && exit\"
end tell"
;;
"iTerm2")
osascript -e "tell application \"iTerm2\"
create window with default profile
tell current session of current window
write text \"glow -p ${temp_file} && exit\"
end tell
end tell"
;;
"Warp")
open -a /Applications/Warp.app "${temp_file}"
;;
*)
;;
esac
}
#==========================================================
# MAIN SCRIPT
#==========================================================
# Initialize variables
VERBOSE_MODE=false
# Check for required dependencies
verify_dependencies
# Initialize the prompt templates
init_prompt_templates
# Parse command line arguments
parse_arguments "$@"
# If verbose mode is on, show configuration information
if [ "$VERBOSE_MODE" = true ]; then
echo "[DEBUG] Using model: $SELECTED_MODEL"
echo "[DEBUG] Using command: $LLM_COMMAND $LLM_ATTR"
echo "[DEBUG] Language detection model: $LANGUAGE_DETECTION_MODEL"
echo "[DEBUG] Default prompt template: $DEFAULT_PROMPT_TEMPLATE"
fi
# Select a template if not specified in command line
select_prompt_template
# Show template info in verbose mode
if [ "$VERBOSE_MODE" = true ]; then
echo "[DEBUG] Selected template: $SELECTED_TEMPLATE"
echo "[DEBUG] Template content:"
echo "${PROMPT_TEMPLATES[\"$SELECTED_TEMPLATE\"]}"
fi
# Process the input with the selected template
process_with_template