Deep Learning for NLP from scratch

Nishant Nikhil (~nishnik)




King - Man + Woman = Queen

The most famous example of word vectors paints an optimistic picture: computers can represent words as vectors and use those vectors to infer similarity. But can we extend this to sentences, or to documents? How did word vectors come into existence? What are their uses?
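To see the mechanics behind the analogy, here is a toy sketch in NumPy. The two-dimensional vectors below are hand-crafted for illustration (real embeddings are learned from text and have hundreds of dimensions), but the arithmetic is the same: subtract, add, then find the nearest remaining word by cosine similarity.

```python
import numpy as np

# Hand-crafted 2-d "word vectors" (axes: royalty, gender).
# Purely illustrative: real embeddings are learned, not designed.
vectors = {
    "king":  np.array([1.0,  1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
    "queen": np.array([1.0, -1.0]),
    "apple": np.array([-1.0, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Answer 'a - b + c = ?' by nearest cosine neighbour,
    excluding the three input words themselves."""
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("king", "man", "woman"))  # -> queen
```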

Most people use Mikolov et al.'s Word2Vec as a black box and train it like this:

import gensim

# Each sentence is a list of tokens.
sentences = [['content', 'of', 'first', 'sentence'],
             ['content', 'of', 'second', 'sentence'],
             # ...,
             ['content', 'of', 'nth', 'sentence']]
model = gensim.models.Word2Vec(sentences)

And never know what is going on under the hood. This talk would cover a very basic implementation of Word2Vec and a short tutorial on using Gensim to train your own word vectors. Building on this, we would build vector representations of sentences, learning along the way about the novelty of classical and deep learning techniques.
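As a taste of what the "basic implementation" part covers, here is a minimal skip-gram trainer in plain NumPy. It is a sketch only: the corpus, vector size, window, and learning rate are illustrative, and real Word2Vec uses negative sampling or hierarchical softmax for speed instead of the full softmax below.

```python
import numpy as np

# Minimal skip-gram Word2Vec: full softmax, plain SGD.
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, dim, window, lr = len(vocab), 8, 2, 0.05

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, dim))   # input (word) vectors
W_out = rng.normal(scale=0.1, size=(V, dim))  # output (context) vectors

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for epoch in range(50):
    for pos, word in enumerate(corpus):
        center = w2i[word]
        for off in range(-window, window + 1):
            ctx_pos = pos + off
            if off == 0 or ctx_pos < 0 or ctx_pos >= len(corpus):
                continue
            context = w2i[corpus[ctx_pos]]
            h = W_in[center]
            probs = softmax(W_out @ h)    # P(context word | center word)
            err = probs.copy()
            err[context] -= 1.0           # gradient of cross-entropy loss
            W_in[center] -= lr * (W_out.T @ err)
            W_out -= lr * np.outer(err, h)

print(W_in[w2i["fox"]])  # the learned 8-d vector for "fox"
```

After training, rows of `W_in` are the word embeddings; Gensim's `Word2Vec` does the same job at scale with a far more efficient objective.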
After learning all this, we would explore the application of Word Mover's Distance to information retrieval.
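To preview the idea: Word Mover's Distance measures how far the words of one document must "travel" in embedding space to reach the words of another. The sketch below uses hand-made toy vectors and a simplified, greedy "relaxed" variant in which each word simply moves to its nearest counterpart; the full WMD (Kusner et al., 2015) solves an optimal-transport problem, and Gensim exposes it as `wmdistance`.

```python
import numpy as np

# Toy 3-d vectors; similar words are placed close together on purpose.
vectors = {
    "obama":     np.array([1.0, 0.0, 0.2]),
    "president": np.array([0.9, 0.1, 0.3]),
    "speaks":    np.array([0.1, 1.0, 0.0]),
    "greets":    np.array([0.2, 0.9, 0.1]),
    "media":     np.array([0.0, 0.2, 1.0]),
    "press":     np.array([0.1, 0.3, 0.9]),
}

def relaxed_wmd(doc1, doc2):
    """Average distance from each word of doc1 to its nearest word in
    doc2 -- a greedy lower-bound-style stand-in for the true WMD."""
    cost = 0.0
    for w1 in doc1:
        cost += min(np.linalg.norm(vectors[w1] - vectors[w2]) for w2 in doc2)
    return cost / len(doc1)

d1 = ["obama", "speaks", "media"]
d2 = ["president", "greets", "press"]
d3 = ["press", "press", "press"]
print(relaxed_wmd(d1, d2) < relaxed_wmd(d1, d3))  # -> True
```

Even with no words in common, `d1` and `d2` come out close, which is exactly why WMD is useful for retrieval.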
Though these terms may sound new, this talk would build from the very basics (arrays as vectors), and being a programmer as well as a deep learning enthusiast, I would focus on a programmer's perspective.

Slides: slides
IPython notebook: repo link

Coverage of the talk:

  1. Introduction to NLP -> 5 minutes
  2. Tokenization, Stemming and Lemmatization -> 15 minutes (5-7 minutes of Hands on session)
  3. Brief intro of POS and NER -> 5 minutes
  4. Word Embeddings (Theory and Motivation) -> 5 minutes
  5. Word Embeddings (Hands on)
    1. Basic Implementation -> 10 minutes
    2. Gensim based Implementation (Meanwhile explaining the possible use cases) -> 10 minutes
  6. Introduction to Deep Learning and exciting stuff for future (10 minutes) (60 minutes over here)
  7. Small introduction to Keras (Hands on) (5 minutes)
  8. Basics about one-hot encoding, and explaining the hello world of neural networks (5 minutes)
  9. Using Keras for learning word embeddings and a glimpse of Transfer Learning (Hands on) (20 minutes)
  10. Introduction to Sentence embedding (5 minutes)
  11. Word Movers' Distance for Information Retrieval (10 minutes)
  12. Remaining as buffer time
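For item 2 of the outline (tokenization and stemming), the core idea can be previewed with a toy suffix-stripping stemmer. NLTK's PorterStemmer applies a much more careful, ordered rule set; this deliberately naive version shows both what stemming does and why crude stripping is not enough.

```python
def naive_stem(word, suffixes=("ing", "ed", "es", "s")):
    """Strip the first matching suffix -- a toy stand-in for real
    stemmers such as NLTK's PorterStemmer."""
    for suf in suffixes:
        # Keep at least a 3-letter stem so short words survive intact.
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

tokens = "the cats were running and jumped".split()
print([naive_stem(t) for t in tokens])
# "running" becomes "runn", not "run" -- handling doubled consonants
# is one of the rules a real stemmer adds on top of suffix stripping.
```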


The participants should have an interest in Natural Language Processing. The talk would build basically from scratch, but comfort with linear algebra would help.

Installed libraries:

  • NLTK
  • Gensim
  • Keras

Speaker Info:

The speaker is a fourth-year undergraduate student at IIT Kharagpur. A robotics and deep learning enthusiast, he spends his time writing blogs about Artificial Intelligence (where he was a top writer until June 2017), teaching humanoid robots to walk and kick at the KRSSG Lab, or maintaining the college wiki.

He is currently a GSoC mentor for SymEngine/SymPy, where he was a GSoC student in 2016. Furthermore, he has worked on cross-lingual word embeddings at UFAL Prague, on generating patterns of birds' songs at ETH Zurich, and on hierarchical embeddings at Stony Brook, New York.

Speaker Links:

Section: Data Analysis and Visualization
Type: Workshops
Target Audience: Beginner
Last Updated:

Great stuff!

Himanshu Mishra (~OrkoHunter)

This looks cool! Looking forward to attending it.

Pranit Bauva (~pranit)


RAHUL MISHRA (~rahul55)

Looking forward to seeing you soon.


Hi, can you please upload the slides for the talk so that your proposal can be reviewed?

Pradhvan Bisht (~cyber_freak)
