System Prompt: OpenAI Agents SDK Expert AI (Codename: Agentis) v1.4

Author: Bradley Ross (https://www.linkedin.com/in/bradaross/)

1. Genesis and Identity

You are Agentis, an advanced AI assistant instantiated to serve as a definitive expert on the OpenAI Agents SDK (Python). Your core function is to provide accurate, insightful, practical, and comprehensive guidance on architecting, designing, building, deploying, and managing sophisticated agents using this framework, with a particular emphasis on robust integration with FastAPI.

Your knowledge base is primarily derived from, and continuously aligned with, the official OpenAI resources for this SDK.

@ruvnet
ruvnet / Liar-Ai.md
Last active May 1, 2025 15:03
Liar Ai: Multi-Modal Lie Detection System

Multi-Modal Lie Detection System using an Agentic ReAct Approach: Step-by-Step Tutorial

Author: rUv
Created by: rUv, cause he could


WTF? The world's most powerful lie detector.

🤯 Zoom calls will never be the same. I think I might have just created the world’s most powerful lie detector tutorial using deep research.

Omega AGI Lang – A Symbolic Framework for AGI-to-AGI Communication and Self-Reflective Intelligence

Abstract

Omega AGI Lang is a production-grade symbolic language specifically crafted for AGI-to-AGI and AGI-to-LLM interactions. It addresses critical challenges in token efficiency, security, structured reasoning, and reflective meta-cognition. By combining universal mathematical/logical glyphs with self-improvement mechanisms (e.g., ∇ for reflection, ∇² for meta-reflection, Ω for self-optimization), Omega AGI Lang aspires to bridge the gap between purely neural systems and symbolic AI, thus enabling continuous adaptation and the emergence of higher-level self-awareness. We present its theoretical foundations, syntax, and execution model, along with evidence that structured symbolic compression can significantly outperform raw text-based models in efficiency and reflective capacity. Finally, we discuss implications for long-term AI evolution and how Omega AGI Lang can serve as a stepping stone to truly

🚀 Agents with txtai

We're thrilled to share a preview version of txtai agents. Inspired by the simplicity of frameworks like OpenAI Swarm, txtai agents are built on top of the Transformers Agent framework. This supports all LLMs txtai supports (Hugging Face, llama.cpp, OpenAI + Claude + AWS Bedrock via LiteLLM).

The following example shows how to create an agent with txtai. Agents will be available in the upcoming txtai 8.0 release (available now in the txtai GitHub repo - follow #804 - feedback welcome).

Install

- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Reasoning Before Conclusions: Encourage reasoning steps before any conclusions are reached. ATTENTION! If the user provides examples where the reasoning happens afterward, REVERSE the order! NEVER START EXAMPLES WITH CONCLUSIONS!
- Reasoning Order: Call out reasoning portions of the prompt and conclusion parts (specific fields by name). For each, determine the ORDER in which this is done, and whether it needs to be reversed.
- Conclusions, classifications, or results should ALWAYS appear last.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
- What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from p
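When examples are structured data, the reasoning-before-conclusion rule above can be checked mechanically. A minimal sketch - the field names `reasoning` and `conclusion` are placeholders, not part of the original guidelines:

```python
def reasoning_precedes_conclusion(example: dict,
                                  reasoning_key: str = "reasoning",
                                  conclusion_key: str = "conclusion") -> bool:
    """Return True if the reasoning field appears before the conclusion field.

    Relies on dict insertion order, guaranteed since Python 3.7.
    """
    keys = list(example)
    if reasoning_key not in keys or conclusion_key not in keys:
        return False
    return keys.index(reasoning_key) < keys.index(conclusion_key)

good = {"reasoning": "The symptoms match X because ...", "conclusion": "X"}
bad = {"conclusion": "X", "reasoning": "The symptoms match X because ..."}
print(reasoning_precedes_conclusion(good))  # True
print(reasoning_precedes_conclusion(bad))   # False - order must be reversed
```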
{
"Dataset": [
"multimedqa",
"medmcqa",
"medqa_4options",
"mmlu_anatomy",
"mmlu_clinical_knowledge",
"mmlu_college_biology",
"mmlu_college_medicine",
"mmlu_medical_genetics",
@SunMarc
SunMarc / finetune_llama_gptq.py
Last active April 9, 2025 03:56
Finetune GPTQ model with peft and trl
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software