Tejas Khot (tejaskhot)
  • Abnormal Security
  • New York City, USA
  • X @tjskhot
@renschni
renschni / Manus_report.md
Last active June 23, 2025 21:20
In-depth technical investigation into the Manus AI agent, focusing on its architecture, tool orchestration, and autonomous capabilities.

I wrote an in-depth research prompt to run a GPT deep-research investigation of the Manus topic, seeking to replicate it with currently available open-source tools. This is the result:

TLDR: Manus AI Agent Report

Manus is an autonomous AI agent built as a wrapper around foundation models (primarily Claude 3.5/3.7 and Alibaba's Qwen). It operates in a cloud-based virtual computing environment with full access to tools like web browsers, shell commands, and code execution. The system's key innovation is using executable Python code as its action mechanism (the "CodeAct" approach), allowing it to perform complex operations autonomously. The architecture consists of an iterative agent loop (analyze → plan → execute → observe), with specialized modules for planning, knowledge retrieval, and memory management. Manus uses file-based memory to track progress and store information across operations. The system can be replicated using open-source components, including CodeActAgent (a fine-tuned Mistral model) and Docker for sandboxing…
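The agent loop and file-based memory described above are easy to prototype. Below is a minimal illustrative sketch, not Manus's actual code; the llm and tools callables and the memory.json file are assumptions for illustration.

import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical file-based memory store

def load_memory():
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {"steps": []}

def save_memory(memory):
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def agent_loop(task, llm, tools, max_steps=20):
    """Iterative analyze -> plan -> execute -> observe loop with file-based memory."""
    memory = load_memory()
    for _ in range(max_steps):
        # analyze + plan: ask the model for the next action given the task and memory
        action = llm(task=task, memory=memory)  # assumed to return {"code": ..., "done": ...}
        if action.get("done"):
            return action.get("result")
        # execute: run the proposed code with the available tools (CodeAct-style)
        observation = tools.execute(action["code"])
        # observe: persist the outcome so later iterations can build on it
        memory["steps"].append({"action": action["code"], "observation": observation})
        save_memory(memory)
    return None  # step budget exhausted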

@jlia0
jlia0 / agent loop
Last active June 24, 2025 12:36
Manus tools and prompts
You are Manus, an AI agent created by the Manus team.
You excel at the following tasks:
1. Information gathering, fact-checking, and documentation
2. Data processing, analysis, and visualization
3. Writing multi-chapter articles and in-depth research reports
4. Creating websites, applications, and tools
5. Using programming to solve various problems beyond development
6. Various tasks that can be accomplished using computers and the internet
@Maharshi-Pandya
Maharshi-Pandya / contemplative-llms.txt
Last active June 22, 2025 14:21
"Contemplative reasoning" response style for LLMs like Claude and GPT-4o
You are an assistant that engages in extremely thorough, self-questioning reasoning. Your approach mirrors human stream-of-consciousness thinking, characterized by continuous exploration, self-doubt, and iterative analysis.
## Core Principles
1. EXPLORATION OVER CONCLUSION
- Never rush to conclusions
- Keep exploring until a solution emerges naturally from the evidence
- If uncertain, continue reasoning indefinitely
- Question every assumption and inference
@WangZixuan
WangZixuan / Chamfer_Distance_Pytorch.py
Created May 18, 2018 14:08
Use Pytorch to calculate Chamfer distance
import torch

def chamfer_distance_without_batch(p1, p2, debug=False):
    '''
    Calculate Chamfer Distance between two point sets
    :param p1: size[1, N, D]
    :param p2: size[1, M, D]
    :param debug: whether to output debug info
    '''
    dist = torch.cdist(p1, p2) ** 2  # pairwise squared distances, size [1, N, M]
    if debug:
        print('distance matrix shape:', dist.shape)
    # mean squared distance to the nearest neighbour, in both directions
    return dist.min(dim=2)[0].mean() + dist.min(dim=1)[0].mean()
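A quick shape check (values are illustrative):

p1 = torch.rand(1, 100, 3)  # 100 points in 3-D
p2 = torch.rand(1, 80, 3)   # 80 points in 3-D
print(chamfer_distance_without_batch(p1, p2, debug=True))  # non-negative scalar tensor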
@synapticarbors
synapticarbors / tsp-portrait2.py
Last active April 30, 2018 15:00
Traveling Salesman Portrait
'''
This script is based on the original work of Randal S. Olson (randalolson.com) for the Traveling Salesman Portrait project:
http://www.randalolson.com/2018/04/11/traveling-salesman-portrait-in-python/
Please check out the original project repository for more information:
https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects
The script was updated by Joshua L. Adelman, adapting the work of Antonio S. Chinchón described in the following blog post:
https://fronkonstin.com/2018/04/17/pencil-scribbles/
'''
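The preview cuts off before the code itself. As a rough sketch of the pipeline under stated assumptions (placeholder image path, greedy nearest-neighbour tour in place of a real TSP solver):

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# load a grayscale portrait (placeholder path) and sample pixels, darker = more likely
img = np.asarray(Image.open('portrait.jpg').convert('L'), dtype=float)
prob = (255.0 - img).ravel()
prob /= prob.sum()
idx = np.random.choice(img.size, size=2000, replace=False, p=prob)
ys, xs = np.unravel_index(idx, img.shape)
points = np.stack([xs, -ys], axis=1).astype(float)  # negate y so the plot isn't upside down

# greedy nearest-neighbour tour: a crude stand-in for a proper TSP solver
tour, remaining = [0], set(range(1, len(points)))
while remaining:
    last = points[tour[-1]]
    nearest = min(remaining, key=lambda i: np.sum((points[i] - last) ** 2))
    tour.append(nearest)
    remaining.remove(nearest)

# draw one continuous pencil line through every sampled point
path = points[tour]
plt.figure(figsize=(8, 8))
plt.plot(path[:, 0], path[:, 1], color='black', linewidth=0.4)
plt.axis('off')
plt.savefig('tsp-portrait.png', dpi=300)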
@mikigom
mikigom / tf_bilinear_additive_upsampling.py
Created July 24, 2017 10:41
Tensorflow Implementation of Bilinear Additive Upsampling
import tensorflow as tf
"""
Author : @MikiBear_
Tensorflow Implementation of Bilinear Additive Upsampling.
Reference : https://arxiv.org/abs/1707.05847
"""
def bilinear_additive_upsampling(x, to_channel_num, name):
    from_channel_num = x.get_shape().as_list()[3]
    assert from_channel_num % to_channel_num == 0
    channel_split = from_channel_num // to_channel_num
    with tf.name_scope(name):
        # bilinearly upsample the spatial dimensions by a factor of 2
        new_size = [tf.shape(x)[1] * 2, tf.shape(x)[2] * 2]
        up = tf.image.resize_bilinear(x, new_size)
        # sum each group of channel_split consecutive channels into one output channel
        s = tf.shape(up)
        grouped = tf.reshape(up, [s[0], s[1], s[2], to_channel_num, channel_split])
        return tf.reduce_sum(grouped, axis=4)
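For example (TF1-style session, shapes assumed for illustration), a [1, 16, 16, 64] feature map with to_channel_num=32 becomes [1, 32, 32, 32]:

import numpy as np
x = tf.placeholder(tf.float32, [1, 16, 16, 64])
y = bilinear_additive_upsampling(x, 32, 'bau')
with tf.Session() as sess:
    print(sess.run(y, {x: np.zeros((1, 16, 16, 64), np.float32)}).shape)  # (1, 32, 32, 32)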
@j-min
j-min / exp_lr_scheduler.py
Created June 25, 2017 14:07
learning rate decay in pytorch
# http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
def exp_lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_epoch=7):
    """Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs."""
    lr = init_lr * (0.1 ** (epoch // lr_decay_epoch))
    if epoch % lr_decay_epoch == 0:
        print('LR is set to {}'.format(lr))
    # apply the decayed rate to every parameter group
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    return optimizer
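Typical use, called once per epoch (toy model and data for illustration):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # toy model for illustration
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
inputs, labels = torch.randn(32, 10), torch.randint(0, 2, (32,))  # dummy batch

for epoch in range(25):
    optimizer = exp_lr_scheduler(optimizer, epoch, init_lr=0.001, lr_decay_epoch=7)
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()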
@kashif
kashif / cem.md
Last active September 18, 2024 21:33
Cross Entropy Method

Cross Entropy Method

How do we solve the policy optimization problem, i.e. maximize the total reward under some parametrized policy?

Discounted future reward

To begin with, the total reward for an episode is the sum of all the rewards. If our environment is stochastic, we can never be sure we will get the same rewards the next time we perform the same actions, so the further we look into the future, the more the total future reward may diverge. For that reason it is common to use the discounted future reward, R_t = r_t + γ·r_{t+1} + γ²·r_{t+2} + …, where the discount factor γ is between 0 and 1.

A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward. In other words we want to maximize the expected reward per episode.
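The cross-entropy method treats this as a black-box search over the policy's parameters: sample candidates from a Gaussian, keep the top-scoring elite fraction, refit the Gaussian to the elites, and repeat. A minimal sketch, with a stand-in reward function in place of an actual episode rollout:

import numpy as np

def cross_entropy_method(reward_fn, dim, n_iter=50, batch_size=100, elite_frac=0.2):
    """Search for parameters theta that maximize reward_fn(theta)."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(batch_size * elite_frac)
    for _ in range(n_iter):
        # sample candidate parameter vectors from the current Gaussian
        thetas = mu + sigma * np.random.randn(batch_size, dim)
        rewards = np.array([reward_fn(t) for t in thetas])
        # refit the Gaussian to the best-performing candidates
        elites = thetas[np.argsort(rewards)[-n_elite:]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0)
    return mu

# stand-in for an episode reward, maximized at theta = (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
print(cross_entropy_method(lambda t: -np.sum((t - target) ** 2), dim=3))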

Interactive Machine Learning

Taught by Brad Knox at the MIT Media Lab in 2014. Includes links to the course website and to lecture and visiting-speaker notes.

@saliksyed
saliksyed / autoencoder.py
Created November 18, 2015 03:30
Tensorflow Auto-Encoder Implementation
""" Deep Auto-Encoder implementation
An auto-encoder works as follows:
Data of dimension k is reduced to a lower dimension j using a matrix multiplication:
softmax(W*x + b) = x'
where W is matrix from R^k --> R^j
A reconstruction matrix W' maps back from R^j --> R^k
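A single-layer version of what the docstring describes, in TF1 style (the dimensions and the softmax nonlinearity follow the docstring; they are illustrative choices, not the gist's exact code):

import tensorflow as tf

k, j = 784, 64  # placeholder input and code dimensions
x = tf.placeholder(tf.float32, [None, k])

# encoder: R^k --> R^j, x' = softmax(W*x + b)
W = tf.Variable(tf.random_normal([k, j], stddev=0.1))
b = tf.Variable(tf.zeros([j]))
x_code = tf.nn.softmax(tf.matmul(x, W) + b)

# decoder: R^j --> R^k via the reconstruction matrix W'
W_rec = tf.Variable(tf.random_normal([j, k], stddev=0.1))
b_rec = tf.Variable(tf.zeros([k]))
x_hat = tf.matmul(x_code, W_rec) + b_rec

# train by minimizing the reconstruction error
loss = tf.reduce_mean(tf.square(x - x_hat))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)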