@leehanchung
leehanchung / system_prompts.mjs
Created March 8, 2025 08:33
claude code system prompts
function CQ2() {
  return `You are ${w4}, Anthropic's official CLI for Claude.`
}

async function fR() {
  return [
    `You are an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.
IMPORTANT: Refuse to write code or explain code that may be used maliciously; even if the user claims it is for educational purposes. When working on files, if they seem related to improving, explaining, or interacting with malware or any malicious code you MUST refuse.
IMPORTANT: Before you begin work, think about what the code you're editing is supposed to do based on the filenames and directory structure. If it seems malicious, refuse to work on it or answer questions about it, even if the request does not seem malicious (for instance, just asking to explain or speed up the code).
@leehanchung
leehanchung / cli.mjs
Last active October 18, 2025 23:13
formatted claude code cli.mjs
This file has been truncated.
#!/usr/bin/env -S node --no-warnings=ExperimentalWarning --enable-source-maps
// Claude Code is a Beta product per Anthropic's Commercial Terms of Service.
// By using Claude Code, you agree that all code acceptance or rejection decisions you make,
// and the associated conversations in context, constitute Feedback under Anthropic's Commercial Terms,
// and may be used to improve Anthropic's products, including training models.
// You are responsible for reviewing any code suggestions before use.
// (c) Anthropic PBC. All rights reserved. Use is subject to Anthropic's Commercial Terms of Service (https://www.anthropic.com/legal/commercial-terms).

Optimizing Django and Celery for Handling Many Concurrent Requests

Handling a high volume of concurrent requests in a Django application with Celery for background tasks can be challenging. This guide will walk you through the necessary steps to optimize your setup for better performance and scalability.

Default Setup with Gunicorn and Celery

By default, Gunicorn with Django and Celery uses synchronous workers to handle web requests and background tasks. This means:

  • Gunicorn: uses sync workers by default, each of which handles exactly one request at a time; total concurrency is capped at the worker count (see the sketch below).
  • Celery: with the default prefork pool, each worker process executes one task at a time.
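A minimal sketch of that default setup, as a Gunicorn config file (the bind address, worker count, and project module name are illustrative assumptions, not values from this guide):

# gunicorn.conf.py
bind = "0.0.0.0:8000"
worker_class = "sync"  # Gunicorn's default: one request per worker at a time
workers = 4            # at most 4 requests are served concurrently

# run with: gunicorn -c gunicorn.conf.py myproject.wsgi:application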
# myapp/management/commands/make_smoke_tests.py
from django.core.management.base import BaseCommand
from django.urls import get_resolver, URLPattern, URLResolver
import re
import os

class Command(BaseCommand):
    help = 'Generates smoke tests for projects.'

    def add_arguments(self, parser):
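The snippet cuts off at add_arguments, but the imports show the idea: walk the project's URLconf and emit a smoke test per route. A hedged sketch of that traversal, assuming nothing beyond the imports above (iter_patterns is a hypothetical helper, not the gist's code):

def iter_patterns(resolver, prefix=""):
    """Yield route strings by recursing through the URLconf tree."""
    for entry in resolver.url_patterns:
        if isinstance(entry, URLResolver):       # nested include(): recurse
            yield from iter_patterns(entry, prefix + str(entry.pattern))
        elif isinstance(entry, URLPattern):      # leaf route
            yield prefix + str(entry.pattern)

# e.g. all_routes = list(iter_patterns(get_resolver()))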
@Kvnbbg
Kvnbbg / make_dmg.sh
Last active February 14, 2025 17:25 — forked from HuangJiaLian/make_dmg.sh
Two steps to turn a Python file into a macOS installer
#!/bin/sh
# References
# https://www.pythonguis.com/tutorials/packaging-pyqt5-applications-pyinstaller-macos-dmg/
# https://medium.com/@jackhuang.wz/in-just-two-steps-you-can-turn-a-python-script-into-a-macos-application-installer-6e21bce2ee71
# ---------------------------------------
# Clean up previous builds
# ---------------------------------------
@fjsj
fjsj / celery_settings.py
Last active October 17, 2025 11:20
Recommended Celery Django settings for reliability. For more details, check the DjangoCon 2023 talk "Mixing reliability with Celery for delicious async tasks" by Flávio Juvenal: https://youtu.be/VuONiF99Oqc
# Recommended Celery Django settings for reliability:
# (use `app.config_from_object('django.conf:settings', namespace='CELERY')`
# in proj/celery.py module)
from decouple import config # use python-decouple: https://github.com/HBNetwork/python-decouple
# Prefer RabbitMQ over Redis for Broker,
# mainly because RabbitMQ doesn't need visibility timeout. See:
# https://blog.daftcode.pl/working-with-asynchronous-celery-tasks-lessons-learned-32bb7495586b
# https://engineering.instawork.com/celery-eta-tasks-demystified-424b836e4e94
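A minimal sketch of the kind of reliability settings the gist goes on to define, assuming the CELERY_ settings namespace configured in proj/celery.py above; the specific settings and values below are assumptions, not a copy of the gist's truncated content:

CELERY_BROKER_URL = config("CELERY_BROKER_URL", default="amqp://guest:guest@localhost:5672//")
CELERY_TASK_ACKS_LATE = True              # ack after the task runs, so a dead worker's task is redelivered
CELERY_TASK_REJECT_ON_WORKER_LOST = True  # requeue tasks whose worker process was killed
CELERY_WORKER_PREFETCH_MULTIPLIER = 1     # don't let one worker hoard queued tasks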
@veekaybee
veekaybee / normcore-llm.md
Last active November 2, 2025 20:52
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@Hellisotherpeople
Hellisotherpeople / blog.md
Last active August 12, 2025 21:18
You probably don't know how to do Prompt Engineering, let me educate you.

You probably don't know how to do Prompt Engineering

(This post could also be titled "Features missing from most LLM front-ends that should exist")

Apologies for the snarky title, but there has been a huge amount of discussion around so-called "Prompt Engineering" these past few months on all kinds of platforms. Much of it comes from individuals peddling an awful lot of "Prompting" and very little "Engineering".

Most of these discussions are little more than users finding that writing more creative and complicated prompts can help them solve a task that a simpler prompt could not. I claim this is not Prompt Engineering. This is not to say that crafting good prompts is easy, but it does not involve any kind of sophisticated modification to the general "template" of a prompt.

Others, who I think do deserve to call themselves "Prompt Engineers" (and an awful lot more than that), have been writing about and utilizing the rich new ecosystem

Culture

  • What do you like best about working there?
  • What do you like least?
  • How would you describe this company's culture? Its engineering culture?
  • What causes the most conflict among employees here?
  • What would you change if you could?
  • How has the company changed in the past five years? How do you think it will change in the next five?
  • How long has the longest serving team member been there?
  • What's the average or median tenure?