Created April 12, 2015 16:08
require 'nn'
input = torch.DoubleTensor{4} --input word index
target = torch.DoubleTensor{6,5,4,2} --target word indices. So we have 1 true context ("6") and 3 negative contexts ("5,4,2")
label = torch.DoubleTensor({1,0,0,0}) --the first label is true sample, rest are neg samples
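-- With these labels, BCE on sigmoid(dot product) scores is exactly the skip-gram
-- negative-sampling objective:
--   L = -log sigmoid(u_6 . v_4) - sum_{n in 5,4,2} log(1 - sigmoid(u_n . v_4))
-- where v is the word table (model1) and u is the context table (model2).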
model1 = nn.Sequential()
model1:add(nn.LookupTable(1000,4)) -- word embedding layer: vocab size is 1000 and each word is 4-dimensional
model2 = nn.Sequential()
model2:add(nn.LookupTable(1000,4)) -- context embedding layer; each of its rows is dot-producted with the input word's embedding
sig_layer = nn.Sigmoid() -- Sigmoid layer to get probabilities (SoftMax would couple the four outputs, which is wrong for independent binary labels)
criterion = nn.BCECriterion() -- simple binary case, so using BCE Criterion
word_emb = model1:forward(input):view(4)   -- 4-dim embedding of the input word
context_embs = model2:forward(target)      -- 4x4 matrix: one embedding row per context word in `target`
scores = torch.mv(context_embs, word_emb)  -- dot product of each context embedding with the word embedding; each element is the log odds of being a true sample
probs = sig_layer:forward(scores)          -- get probabilities
-- get errors
err = criterion:forward(probs, label)
dl_dp = criterion:backward(probs, label)   -- gradient w.r.t. the probabilities
dl_ds = sig_layer:backward(scores, dl_dp)  -- gradient w.r.t. the dot-product scores
--backpropagate through both lookuptables
model1:zeroGradParameters()
model1:backward(input, torch.mv(context_embs:t(), dl_ds):view(1,4)) -- d(score_i)/d(word_emb) = context_embs[i]
model1:updateParameters(0.01) --update word embeddings
model2:zeroGradParameters()
model2:backward(target, torch.ger(dl_ds, word_emb)) -- d(score_i)/d(context_embs[i]) = word_emb
model2:updateParameters(0.01) --update context embeddings
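
For completeness, here is one way the snippet could be driven over more than one example. This loop is only a sketch: the `examples` list is made-up data, not part of the gist, and in real skip-gram training the true context comes from a window over the corpus while the negatives are drawn from a smoothed unigram distribution.

-- Hypothetical driver loop; `examples` holds {word, {true_ctx, neg, neg, neg}} pairs.
examples = {
  {4, {6, 5, 4, 2}},
  {9, {3, 7, 1, 8}},
}
for epoch = 1, 10 do
  for _, ex in ipairs(examples) do
    local input  = torch.DoubleTensor{ex[1]}
    local target = torch.DoubleTensor(ex[2])
    local word_emb     = model1:forward(input):view(4)
    local context_embs = model2:forward(target)
    local scores = torch.mv(context_embs, word_emb)
    local probs  = sig_layer:forward(scores)
    local err    = criterion:forward(probs, label)
    local dl_ds  = sig_layer:backward(scores, criterion:backward(probs, label))
    model1:zeroGradParameters()
    model1:backward(input, torch.mv(context_embs:t(), dl_ds):view(1,4))
    model1:updateParameters(0.01)
    model2:zeroGradParameters()
    model2:backward(target, torch.ger(dl_ds, word_emb))
    model2:updateParameters(0.01)
    print(epoch, err) -- loss should trend downward on the true contexts
  end
end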
@shashanksonkar

Hey. Please let me know whether updateParameters updates just the rows for the input tensor in the model1 lookup table, or all the weights in the lookup table.
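
In stock Torch7 nn, LookupTable:accGradParameters only accumulates gradient into the rows indexed in the forward pass, so after zeroGradParameters a subsequent updateParameters call only moves those rows; every other row keeps a zero gradient and is left unchanged. A quick way to check this empirically (a sketch with illustrative names, reusing input, context_embs, and dl_ds from the snippet above):

-- Snapshot the weights, do one update, and see which rows moved.
lut = model1:get(1)              -- the nn.LookupTable inside model1
before = lut.weight:clone()
model1:zeroGradParameters()
model1:backward(input, torch.mv(context_embs:t(), dl_ds):view(1,4))
model1:updateParameters(0.01)
diff = (lut.weight - before):abs():sum(2) -- total change per vocabulary row
for i = 1, diff:size(1) do
  if diff[i][1] ~= 0 then print('row ' .. i .. ' changed') end
end
-- Expected output: only the row for word index 4 (the looked-up input).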


ghost commented Nov 10, 2015

I think there is an error in lines 14 and 27.
