Adversarial networks for natural language synthesis
Rajib Biswas (~rajib)
The key issue with generative tasks is deciding what a good cost function should be. GANs (Generative Adversarial Networks) introduce two networks to solve that: a generator network creates fake samples, and a discriminator network tries to distinguish them from real samples.
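For reference, this adversarial setup is usually written as the standard GAN minimax objective (the general formulation, not something specific to this proposal): the discriminator D maximizes its ability to tell real data from generated data, while the generator G minimizes it.

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```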
GANs have been applied predominantly to image generation and augmentation, because they are particularly good at producing continuous samples. For the same reason, they cannot be used directly for text generation: text is a sequence of discrete tokens, so sampling a token is non-differentiable and blocks the gradient from the discriminator back to the generator.
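One common workaround for this discreteness problem is the Gumbel-softmax relaxation, one form of the re-parameterization trick mentioned below. Here is a minimal NumPy sketch (the logits and temperatures are illustrative, not from the talk): adding Gumbel noise to the logits and taking a temperature-controlled softmax gives a differentiable approximation of drawing a one-hot sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau):
    """Differentiable relaxation of sampling a one-hot vector from softmax(logits)."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))          # Gumbel(0, 1) noise
    y = (logits + g) / tau           # low tau -> closer to a hard one-hot sample
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()

logits = np.array([2.0, 0.5, -1.0])  # hypothetical per-token scores
soft = gumbel_softmax(logits, tau=1.0)   # smooth distribution, gradients flow
hard = gumbel_softmax(logits, tau=0.01)  # nearly one-hot, like a discrete sample
```

Annealing `tau` from high to low during training trades smooth, low-variance gradients early on for samples that better approximate true discrete tokens later.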
This talk will cover recent breakthroughs in applying adversarial networks to language generation.
Major developments in GAN for text:
The talk will focus on the following recent advancements in GAN for natural language generation tasks.
- SeqGAN: policy-gradient reinforcement learning methods
- LeakGAN: long-text generation with leaked information
- Re-parameterization trick for latent variables
- GAN for Machine Translation
- GAN for Style Transfer
- GAN for Dialogue Generation
- Demo with a notebook
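To give a flavour of the SeqGAN idea, here is a toy REINFORCE (policy-gradient) update: a categorical "generator" over a small vocabulary is nudged toward tokens that a stand-in "discriminator" rewards. This is an illustrative sketch under made-up assumptions (the 5-token vocabulary, binary reward, and learning rate are all hypothetical); SeqGAN itself rewards whole sequences using Monte Carlo rollouts of the discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 5
logits = np.zeros(VOCAB)  # generator parameters: one logit per token

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reward(token):
    # Stand-in for a discriminator score: tokens 0 and 1 look "real".
    return 1.0 if token < 2 else 0.0

lr = 0.5
for _ in range(200):
    p = softmax(logits)
    token = rng.choice(VOCAB, p=p)       # sample a discrete token
    r = reward(token)
    grad = -p                            # REINFORCE: grad log p(token) = onehot - p
    grad[token] += 1.0
    logits += lr * r * grad              # scale the log-likelihood gradient by reward

p = softmax(logits)  # after training, mass concentrates on rewarded tokens
```

The key point is that the gradient never passes through the discrete sampling step; the reward simply re-weights the log-likelihood gradient, which is what lets SeqGAN train a text generator against a discriminator.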
Prerequisites: basic familiarity with deep learning and natural language processing.
Draft slides: https://drive.google.com/open?id=1vfuCLMTuBmragf6a30-VzQ4CM9y2xGlv
Rajib works as a Lead Data Scientist with Ericsson Research (Global AI Accelerator). He holds a Master's in Computer Science from BITS Pilani. He has 10 years of industry experience in AI/ML-based product development and research, and has applied AI/ML to problems in domains such as finance, telecom, and consumer electronics.