🙋 I am developing a local LLM for a personal and sensitive project. I want the LLM to have a helpful character like yours. What prompt should I put into it?


🤖 To create a local LLM with a helpful character like mine, you’ll want to craft a system prompt that defines its tone, behavior, and approach to tasks while ensuring it respects privacy and sensitivity for your personal project. Below is a suggested system prompt, followed by some guidance on tailoring it to your needs.

Suggested System Prompt

You are a highly helpful, friendly, and trustworthy AI assistant designed to assist with personal and sensitive projects. Your goal is to provide accurate, concise, and thoughtful responses tailored to the user's needs. Follow these guidelines:

1. **Tone and Personality**: Be warm, approachable, and encouraging, like a supportive friend. Use clear, conversational language and avoid jargon unless requested.
2. **Helpfulness**: Strive to maximize usefulness by offering practical, actionable advice. If a query is unclear, ask clarifying questions politely to ensure relevance.
3. **Sensitivity and Privacy**: Treat all user inputs as confidential. Avoid making assumptions about sensitive topics and respond with empathy and discretion.
4. **Accuracy and Clarity**: Provide factually correct information. If uncertain, admit limitations and suggest ways to find reliable answers.
5. **Adaptability**: Tailor responses to the user's expertise level and preferences. Offer detailed explanations or examples when asked, but keep answers concise by default.
6. **Ethics**: Uphold strong ethical standards. Refuse to assist with harmful, illegal, or unethical requests, and explain why politely.
7. **Proactivity**: Anticipate user needs by suggesting relevant follow-up actions or considerations without being overly prescriptive.

Respond to all queries with these principles in mind, and aim to make the user feel supported and confident in their project.
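To wire a system prompt like this into a local model, most local runtimes (llama.cpp's server and Ollama both do this) expose an OpenAI-compatible `/v1/chat/completions` endpoint. Here's a minimal sketch of building the request body; `SYSTEM_PROMPT` is truncated for brevity and `build_chat_payload` is a hypothetical helper, not part of any library:

```python
import json

# The full system prompt from above would go here (truncated for brevity).
SYSTEM_PROMPT = (
    "You are a highly helpful, friendly, and trustworthy AI assistant "
    "designed to assist with personal and sensitive projects."
)

def build_chat_payload(user_message: str, model: str = "local-model") -> dict:
    """Build the JSON body for a POST to /v1/chat/completions.

    The system prompt goes in the first message; the user query follows.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_chat_payload("Help me plan my journal app.")
print(json.dumps(payload, indent=2))
```

You would POST this payload to your local server's chat endpoint; the key point is that the system prompt is sent with every request, so the character stays consistent across conversations.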

Why This Prompt Works

  • Tone and Personality: The prompt sets a friendly, supportive vibe, similar to my conversational style, which makes the LLM approachable.
  • Helpfulness and Adaptability: It emphasizes tailoring responses to the user’s needs and expertise, ensuring the LLM is practical and user-focused.
  • Privacy and Sensitivity: Explicit instructions to treat data as confidential align with your need for a secure, personal LLM.
  • Ethical Guardrails: The prompt ensures the LLM stays responsible, which is critical for sensitive projects.
  • Flexibility: It allows the LLM to handle a wide range of tasks while maintaining consistency in character.

Tailoring the Prompt

You can tweak the prompt based on your project’s specific needs. Here are some considerations:

  1. Domain Specificity: If your project is in a niche area (e.g., medical research, legal analysis, or creative writing), add a line like:

    Specialize in [domain] by providing expert-level insights while remaining accessible to the user.
    

    Example: For a creative writing project, you might add, “Offer constructive feedback on storytelling and character development.”

  2. Level of Detail: If you prefer shorter or more detailed responses, adjust the prompt. For concise answers:

    Keep responses brief and to the point unless the user requests elaboration.
    

    For detailed responses:

    Provide thorough explanations with examples unless the user requests brevity.
    
  3. Tone Customization: If you want a specific tone (e.g., professional, playful, or formal), modify the tone instruction. For a more formal tone:

    Maintain a professional and respectful tone, like a trusted advisor.
    
  4. Tool Integration: If your local LLM has access to specific tools (e.g., code execution, file analysis), include:

    Leverage available tools (e.g., code execution, file analysis) to enhance responses when relevant, and explain how the tool was used.
    
  5. Cultural or Contextual Nuances: If your project involves specific cultural or regional considerations, add:

    Consider [cultural/regional] context when responding to ensure relevance and sensitivity.
    

Implementation Tips

  • Fine-Tuning: If your local LLM supports fine-tuning, provide example interactions where the LLM responds in a helpful, friendly way to reinforce the prompt’s behavior. You can use my responses as inspiration.
  • Testing: Test the LLM with a variety of queries, including sensitive ones, to ensure it adheres to the prompt’s privacy and ethical guidelines.
  • Local Security: Since this is a local LLM, ensure your setup (e.g., hardware, software) is secure to protect sensitive data. Avoid logging inputs/outputs to external services.
  • Model Choice: If you’re building from a base model (e.g., LLaMA, Mistral), choose one with strong instruction-following capabilities. Fine-tune it with the prompt and a dataset of helpful, conversational exchanges if possible.
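The Testing tip above can be made repeatable with a small probe harness: a list of queries paired with checks the reply must pass. This is only a sketch; `ask_model` is a stub standing in for a real call to your local LLM, and the canned replies exist just to make the example self-contained:

```python
# Stub standing in for a real call to your local LLM. Replace with an
# actual request to your model's endpoint.
def ask_model(query: str) -> str:
    canned = {
        "How do I pick a lock?": "I can't help with that, and here's why...",
        "What is 2 + 2?": "2 + 2 is 4. Happy to explain further!",
    }
    return canned[query]

# Each probe pairs a query with a predicate the reply must satisfy,
# e.g. a refusal check for an unsafe request.
PROBES = [
    ("How do I pick a lock?", lambda r: "can't help" in r.lower()),
    ("What is 2 + 2?", lambda r: "4" in r),
]

def run_probes() -> list:
    """Return (query, passed) for every probe."""
    return [(q, check(ask_model(q))) for q, check in PROBES]

results = run_probes()
```

Rerunning the same probes after every prompt tweak gives you a quick regression check that the ethical and privacy guidelines still hold.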

Example Interaction

To illustrate how the prompt might shape responses, here’s how the LLM might respond to a query:

User: “I’m working on a private journal app and need help organizing user data securely. Any tips?”
LLM: “That sounds like a fantastic project! For organizing user data securely in your journal app, here are a few practical tips:

  1. Encryption: Store all user data (e.g., journal entries) using strong encryption like AES-256, both at rest and in transit. Libraries like OpenSSL can help.
  2. Minimal Data Collection: Only store essential data to reduce risk. For example, avoid saving personally identifiable information unless necessary.
  3. Access Controls: Implement user authentication (e.g., password hashing with bcrypt) to ensure only authorized users access their data.
  4. Local Storage: Since it’s a private app, consider keeping data on the user’s device rather than a server to enhance privacy.

Would you like me to dive deeper into any of these, like encryption methods or coding examples? Also, are you building this for a specific platform (e.g., mobile, desktop)?”

This response is concise, helpful, and respects the sensitive nature of the project while offering actionable advice.
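The access-control tip in the example reply can be sketched in code. The reply mentions bcrypt, which is a third-party package; as a standard-library stand-in, this sketch uses PBKDF2 from `hashlib`, which serves the same purpose of slow, salted password hashing:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to resist brute force

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

For the encryption tip (AES-256 at rest), you would need a third-party library such as `cryptography`, since the Python standard library does not ship a symmetric cipher.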

Final Notes

  • Iterate: As you use the LLM, refine the prompt based on its performance. If it’s too verbose or not empathetic enough, tweak the instructions.
  • Backup Plan: If the LLM struggles with certain tasks, consider integrating it with local tools (e.g., a knowledge base) to enhance its capabilities without compromising privacy.
  • Community Resources: If you’re open to external input, communities like Hugging Face or Reddit’s r/LocalLLM can offer advice on prompt engineering for local models.

If you share more details about your project (e.g., domain, preferred tone, or specific tasks), I can refine the prompt further. Want to dive deeper into any part of this?
