Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient?

I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much restates his argument.

gadenbuie / find-libs.R
Last active September 8, 2018 22:39 — forked from hrbrmstr/find-libs.R
find libraries used in your R scripts or Rmd documents
find_used_libraries <- function(path = getwd(), include_rmd = FALSE, by_file = FALSE) {
  library(tidyverse)
  if (!requireNamespace("sessioninfo", quietly = TRUE)) {
    install.packages("sessioninfo")
  }
  library(sessioninfo)
  # regexes matching the ways a package can be referenced in a script
  library_pattern <- paste(
    "(?:library|require)\\((.+?)\\)",           # pkgs via library() or require()
    "requireNamespace\\(['\"](.+?)['\"].*?\\)", # pkgs via requireNamespace()
    "([[:alnum:]._]+):{2,3}[[:alnum:]._]+",     # pkgs via pkgname::function()
    sep = "|"
  )
  # .Rmd files fail parse() below and are silently dropped; extract their
  # chunks first (e.g. with knitr::purl()) if you need them
  file_pattern <- if (include_rmd) "\\.(R|r|Rmd|rmd)$" else "\\.[Rr]$"
  s_parse <- safely(parse) # prevents parse() from borking on malformed R files
  exprs <-
    list.files(path, pattern = file_pattern, full.names = TRUE, recursive = TRUE) %>%
    map(s_parse) %>%      # parse!
    map("result") %>%     # we used safely(), so pull the "result" out of each
    discard(is.null) %>%  # get rid of files that failed to parse
    unlist() %>%          # we don't care which file the calls are in (by_file grouping not shown)
    keep(is.language) %>% # keep only parsed expressions
    map_chr(~ paste(deparse(.x), collapse = " "))
  # pull package names out of whichever capture group matched
  str_match_all(exprs, library_pattern) %>%
    map(~ as.vector(.x[, -1])) %>%
    unlist() %>%
    discard(is.na) %>%
    str_remove_all("['\"]") %>% # unquote requireNamespace() captures
    str_remove(",.*$") %>%      # drop extra args, e.g. library(x, quietly = TRUE)
    str_trim() %>%
    unique() %>%
    sort()
}
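
A quick usage sketch (the path and flag are illustrative):

find_used_libraries("~/projects", include_rmd = TRUE)
# returns a sorted character vector of the package names found in the scripts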

Demo:

Spoiler warning

Spoiler text. Note that it's important to have a space after the summary tag. You should be able to write any markdown you want inside the <details> tag... just make sure you close <details> afterward.

console.log("I'm a code block!");
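
For reference, the raw markup behind the demo above looks like this (the blank line after the closing </summary> tag is the "space" the note refers to; without it GitHub won't render the markdown inside):

<details>
<summary>Spoiler warning</summary>

Spoiler text, with any markdown you like, including a fenced code block:

```js
console.log("I'm a code block!");
```

</details>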
baraldilorenzo / readme.md
Last active January 14, 2025 11:07
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
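
For comparison (not part of this gist): the same 16-layer architecture later shipped with Keras itself, so a minimal sketch of loading an equivalent pre-trained model through the stock keras.applications API, rather than the converted Caffe weights described here, would be:

# load the stock VGG16 with ImageNet weights (downloaded on first use)
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
import numpy as np

model = VGG16(weights="imagenet")
x = np.random.rand(1, 224, 224, 3) * 255.0  # stand-in for one 224x224 RGB image
preds = model.predict(preprocess_input(x))
print(preds.shape)  # (1, 1000) ImageNet class scores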

jexchan / multiple_ssh_setting.md
Created April 10, 2012 15:00
Multiple SSH keys for different GitHub accounts

Multiple SSH key settings for different GitHub accounts

Create a different public key

Create a different SSH key, following the article Mac Set-Up Git:

$ ssh-keygen -t rsa -C "your_email@example.com"
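
The usual next steps for this kind of setup (key filenames and account names below are illustrative, not from the original): add each key to the ssh-agent, then give each account its own Host alias in ~/.ssh/config so the matching key is used per remote.

$ ssh-add ~/.ssh/id_rsa_work
$ ssh-add ~/.ssh/id_rsa_personal

# ~/.ssh/config
Host github.com-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_work

Host github.com-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_personal

Remotes then select the account through the alias, e.g. $ git clone git@github.com-work:some-org/some-repo.git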