Adversarial network for natural language synthesis

Rajib Biswas (~rajib)




The key issue with generative tasks is deciding what a good cost function should be. GANs (Generative Adversarial Networks) introduce two networks to solve that: the generator network creates fake samples, and the discriminator network distinguishes them from real samples.
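As a rough illustration of the two-player objective (a toy sketch of my own, not taken from the talk), the classic GAN losses can be written directly from the discriminator's outputs on real and fake samples:

```python
import numpy as np

# Toy losses for the GAN minimax game: the discriminator D maximizes
# log D(x) + log(1 - D(G(z))), while the generator tries to fool D.
def d_loss(d_real, d_fake):
    # d_real, d_fake: discriminator outputs in (0, 1) for real/fake batches
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    # non-saturating generator loss commonly used in practice
    return -np.log(d_fake).mean()

d = d_loss(np.array([0.9]), np.array([0.1]))  # confident discriminator -> low D loss
g = g_loss(np.array([0.1]))                   # rarely fooled -> high G loss
```

Training alternates gradient steps on these two losses; the equilibrium is reached when the generator's samples are indistinguishable from real data.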

GANs have predominantly been applied in the image domain (synthesis and augmentation), because they are particularly good at generating continuous samples. For the same reason, they cannot be used directly for text generation, since text is a sequence of discrete tokens.
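The discreteness problem can be made concrete with a small sketch (my own toy example; names and sizes are illustrative): picking a token with argmax blocks gradients, whereas the Gumbel-softmax trick yields a continuous, differentiable relaxation of a one-hot sample:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Continuous relaxation of sampling a one-hot token: add Gumbel noise
    # to the logits, then apply a temperature-controlled softmax.
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.uniform(1e-10, 1.0, logits.shape)
    g = -np.log(-np.log(u))               # Gumbel(0, 1) noise
    y = np.exp((logits + g) / tau)
    return y / y.sum()

logits = np.array([2.0, 0.5, 0.1])        # unnormalized scores over a 3-token vocab
soft = gumbel_softmax(logits, tau=0.5)    # differentiable "almost one-hot" vector
hard = np.zeros_like(logits)
hard[logits.argmax()] = 1.0               # hard argmax sample: no gradient flows
```

As the temperature `tau` goes to zero the soft sample approaches the hard one-hot vector, which is why this relaxation lets gradients from the discriminator reach the generator.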

This talk will cover the recent breakthroughs in applying adversarial networks for language generation.

Major developments in GAN for text:

The talk will focus on the following recent advancements in GAN for natural language generation tasks.

  • SeqGAN: policy-gradient reinforcement learning methods
  • LeakGAN: long text generation with leaked information
  • Re-parameterization trick for latent variables
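To give a flavor of the SeqGAN idea, here is a minimal REINFORCE-style toy (my own sketch; the vocabulary, reward, and sizes are invented for illustration, not SeqGAN's actual architecture): the generator samples a discrete token, the discriminator's score acts as a reward, and the log-probability of the sampled token is reinforced:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(5)                      # generator scores over a 5-token vocab

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def discriminator_reward(token):
    # stand-in for a discriminator: pretend token 2 looks "real"
    return 1.0 if token == 2 else 0.0

lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    token = rng.choice(5, p=probs)        # non-differentiable sampling step
    reward = discriminator_reward(token)
    grad = -probs
    grad[token] += 1.0                    # d log p(token) / d logits
    logits += lr * reward * grad          # REINFORCE: reward-weighted update
# after enough updates, the generator concentrates mass on the rewarded token
```

The point is that the reward signal sidesteps backpropagating through the discrete sampling step, which is exactly the obstacle that blocks a vanilla GAN on text.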

Applications / tasks:

  1. GAN for Machine Translation

  2. GAN for Style transfer

  3. GAN for Dialogue Generation

Demo with notebook.


Prerequisites: basic familiarity with deep learning and natural language processing.

Content URLs:

Draft slides:

Speaker Info:

Rajib works as a Lead Data Scientist with Ericsson Research (Global AI Accelerator). He did his Masters in Computer Science at BITS Pilani. He has 10 years of industry experience in AI/ML-based product development and research, and has applied AI/ML to problems in domains such as finance, telecom, and consumer electronics.

Speaker Links:


Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Intermediate
Last Updated:

Hello Rajib,

Thanks for submitting the proposal. Though your proposal seems mostly complete, please also go through the best practices listed here -

(CFP Co-ordinator)

Abhishek Yadav (~zerothabhishek)
