export REPO=/p/project/laionize/cherti1/Megatron-LM # MODIFY!
git clone https://github.com/bigcode-project/Megatron-LM.git $REPO
cd $REPO
ml Stages/2024 GCC/12.3.0 OpenMPI CUDA/12 PyTorch/2.1.0-CUDA-12-ALPHA
python -m venv $REPO/env
source $REPO/env/bin/activate
pip install -U pip
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
The Verge: "Meta’s powerful AI language model has leaked online — what happens now?"
Could you confirm that you downloaded the LLaMA series from 4chan? Were you able to get it running yourself or did you just repackage the download? (I was a bit confused reading your tweets about what exactly you'd done there, so if you're able to explain that, it'd be great)
I downloaded it from Facebook, actually. You can find some details here.
Basically, the sequence of events was:
Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on my thoughts about this (and similar) models, and where we stand with respect to language understanding.
Around 2014-2017, right at the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture, revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked in an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs?", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We
import types
from typing import Union, List, Optional, Callable

import diffusers
import torch
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput

@torch.inference_mode()
import numpy as np
from scipy.fftpack import dct

def hash_algo(pil_img, size=10):
    """
    Get perceptual hash of the input image.
    Args:
        pil_img: PIL image to hash.
        size: the hash is size * size bits.
    """
    # shrink and grayscale the image to suppress fine detail
    img = pil_img.convert("L").resize((size * 4, size * 4))
    arr = np.asarray(img, dtype=np.float64)
    # 2-D DCT: transform rows, then columns
    coeffs = dct(dct(arr, axis=0), axis=1)
    # keep the low-frequency top-left block; one bit per coefficient
    low = coeffs[:size, :size]
    return (low > np.median(low)).flatten()
stable diffusion dreaming
creates hypnotic moving videos by smoothly walking randomly through the sample space
example way to run this script:
$ python stablediffusionwalk.py --prompt "blueberry spaghetti" --name blueberry
to stitch together the images, e.g.:
$ ffmpeg -r 10 -f image2 -s 512x512 -i blueberry/frame%06d.jpg -vcodec libx264 -crf 10 -pix_fmt yuv420p blueberry.mp4
First follow https://huggingface.co/docs/huggingface_hub/how-to-upstream#push-files-with-git-lfs
- https://huggingface.co/new-dataset
- git clone https://huggingface.co/datasets/user/dataset
- cd dataset
- put your files in the current folder (probably a mv)
- git lfs track "*" (quote the pattern so the shell does not expand it)
- git add .
- git commit -m "dataset"
- git push
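After the `git lfs track` step, git-lfs records the pattern in a `.gitattributes` file, which should look roughly like this (a sketch, assuming everything in the repo is tracked):

```
* filter=lfs diff=lfs merge=lfs -text
```

Make sure `.gitattributes` itself is included in the `git add` so the tracking rule is pushed along with the data.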