Mehrdad Farahani (m3hrdadfi)
@m3hrdadfi
m3hrdadfi / sample.txt
Created February 23, 2024 14:37
Path of AST
public class Fibonacci {
    public static long fib(int n) {
        if (n <= 1) return n;
        else return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int N = Integer.parseInt(args[0]);
        for (int i = 1; i <= N; i++)
            System.out.println(i + ": " + fib(i));
    }
}
@m3hrdadfi
m3hrdadfi / heatmap.py
Last active November 10, 2023 12:46
heatmap using plotly
import plotly.graph_objects as go

def heat_map(x, y, z):
    """Generate an interactive heat map using Plotly.

    Builds a heat map from the given x/y axis labels and z values, using the
    'Viridis' color scale. The layout (titles, number of ticks, tick text,
    tick fonts) is configured from the inputs, the figure size is adjusted
    automatically to the data, and the result is displayed.

    Args:
        x (list of str): A list of strings representing the labels on the x-axis (n,).
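The gist preview cuts off inside the docstring, so the function body is not shown; the following is a rough, self-contained sketch of what such a helper can look like (the layout choices below are assumptions, not the gist's actual implementation).
import plotly.graph_objects as go

def heat_map(x, y, z):
    # Sketch only: Viridis-colored heat map of z over the given x/y labels.
    fig = go.Figure(data=go.Heatmap(x=x, y=y, z=z, colorscale='Viridis'))
    fig.update_layout(title='Heat map', xaxis_title='x', yaxis_title='y', autosize=True)
    fig.show()

# Example: heat_map(['a', 'b'], ['c', 'd'], [[1, 2], [3, 4]])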
@m3hrdadfi
m3hrdadfi / gpu.py
Last active June 30, 2022 11:02
GPU Memory
import subprocess

def get_gpu_memory():
    """Return the rounded free memory (in GB) of each visible GPU."""
    # source: https://stackoverflow.com/a/59571639
    command = "nvidia-smi --query-gpu=memory.free --format=csv"
    # Drop the trailing empty line and the CSV header; one entry remains per GPU.
    memory_free_info = subprocess.check_output(command.split()).decode('ascii').split('\n')[:-1][1:]
    # Each line looks like "12345 MiB"; convert to (approximate) GB.
    memory_free_values = [round(int(x.split()[0]) / 1000) for x in memory_free_info]
    return memory_free_values
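A quick usage sketch; the returned list has one rounded value (in GB) per GPU that nvidia-smi reports, in the same order.
if __name__ == "__main__":
    print(get_gpu_memory())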
@m3hrdadfi
m3hrdadfi / word_pooling.ipynb
Last active June 3, 2022 13:29
Word Pooling
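The notebook itself cannot be rendered in the gist preview. As a loose illustration of what the title suggests (pooling subword-token embeddings from a Hugging Face encoder into one vector per word), here is a minimal sketch; the checkpoint and the choice of mean pooling are assumptions, not taken from the notebook.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')  # placeholder checkpoint
model = AutoModel.from_pretrained('bert-base-uncased')

words = ['unbelievable', 'results']
enc = tokenizer(words, is_split_into_words=True, return_tensors='pt')
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]      # (seq_len, hidden_size)

# Mean-pool the subword vectors belonging to each original word.
word_ids = enc.word_ids(0)                          # token index -> word index (None for specials)
pooled = torch.stack([
    hidden[[i for i, w in enumerate(word_ids) if w == idx]].mean(dim=0)
    for idx in range(len(words))
])
print(pooled.shape)                                 # (num_words, hidden_size)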
@m3hrdadfi
m3hrdadfi / example.py
Last active June 2, 2022 09:10
CE HF Transformer
import torch

n_cls = 5
a = torch.rand((1, 3, n_cls))   # logits: (batch, seq_len, num_classes)
b = torch.tensor([[0, 1, 2]])   # labels: (batch, seq_len)
print(a.shape)
print(b.shape)
# > torch.Size([1, 3, 5])
# > torch.Size([1, 3])
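The preview ends with the shape checks; the point of the snippet is presumably the flattening that token-level cross-entropy needs. Continuing from the tensors a, b, and n_cls above (this mirrors how Hugging Face token-classification heads feed CrossEntropyLoss):
import torch.nn.functional as F

# (batch, seq_len, num_classes) -> (batch * seq_len, num_classes); (batch, seq_len) -> (batch * seq_len,)
loss = F.cross_entropy(a.view(-1, n_cls), b.view(-1))
print(loss)  # scalar tensor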
@m3hrdadfi
m3hrdadfi / pbar.py
Created May 13, 2022 12:49
Progress bar
import tqdm
import time

# with a known number of iterations
n = 10
pbar = tqdm.tqdm(total=n)
for t in range(n):
    pbar.set_description(f'Your information is changing {t + 1}')
    # do some computation
    time.sleep(0.1)
    pbar.update(1)
pbar.close()
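For reference, the same loop is often written with tqdm's context-manager form, which closes the bar automatically:
import time
import tqdm

n = 10
with tqdm.tqdm(total=n) as pbar:
    for t in range(n):
        pbar.set_description(f'Your information is changing {t + 1}')
        time.sleep(0.1)  # stand-in for the real computation
        pbar.update(1)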
@m3hrdadfi
m3hrdadfi / bot.py
Created May 10, 2022 09:41
DialogGPT Bot
import torch

i = 0          # turn counter
maxlen = 1024  # cap (in tokens) for the running chat history
while True:
    user_input = input('>> User: ').strip()
    if user_input.lower() == "q":  # type "q" to quit
        break
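The preview stops before the model is ever called. A minimal sketch of the usual DialoGPT chat loop follows; the checkpoint name and the history-truncation details are assumptions, not taken from the gist.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-medium')  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained('microsoft/DialoGPT-medium')

maxlen = 1024
chat_history_ids = None
while True:
    user_input = input('>> User: ').strip()
    if user_input.lower() == "q":
        break
    # Append the new user turn (terminated by EOS) to the running history.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt')
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    bot_input_ids = bot_input_ids[:, -maxlen:]  # keep the history within maxlen tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=maxlen,
                                      pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens as the bot's reply.
    reply = tokenizer.decode(chat_history_ids[0, bot_input_ids.shape[-1]:], skip_special_tokens=True)
    print(f'>> Bot: {reply}')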
@m3hrdadfi
m3hrdadfi / summary.py
Last active May 10, 2022 09:21
NLP Summarization
input_ids = tokenizer('summarize: ' + text.lower(),
                      return_tensors='pt').input_ids.to(model.device)
output = model.generate(
    input_ids,
    max_length=200,
    num_beams=8,
    num_beam_groups=4,  # diverse beam search, based on https://arxiv.org/pdf/1610.02424.pdf
    no_repeat_ngram_size=2
)
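The preview omits the setup and the final decoding step. A sketch of what they could look like for a T5-style checkpoint; the model name and the input text are placeholders, not values from the gist.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('t5-base')   # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained('t5-base')
text = "..."                                           # the article to summarize

# ... run the generate() call above, then decode the summary:
summary = tokenizer.decode(output[0], skip_special_tokens=True)
print(summary)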
@m3hrdadfi
m3hrdadfi / sync_to_space.yml
Last active August 16, 2023 14:56
Sync To Hugging Face Space
name: Sync to Hugging Face space
on:
  push:
    branches: [main]
  # to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
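  # The jobs block is cut off in the gist preview. The job below is a sketch based on the
  # Hugging Face "Spaces + GitHub Actions" guide; the user name, space name, and secret
  # name are placeholders rather than values taken from this gist.
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
          lfs: true
      - name: Push to hub
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: git push https://HF_USERNAME:$HF_TOKEN@huggingface.co/spaces/HF_USERNAME/SPACE_NAME main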
@m3hrdadfi
m3hrdadfi / run.sh
Last active July 15, 2021 14:09
Torch XLA - TPU Pod
accelerate launch run_mlm.py \
--dataset_name="wikitext" \
--dataset_config_name="wikitext-2-raw-v1" \
--model_name_or_path="albert-base-v2" \
--output_dir="/path/to/output" \
--max_seq_length=256 \
--per_device_train_batch_size=16 \
--per_device_eval_batch_size=16 \
--line_by_line \
--pad_to_max_length