Learning word representations with deep learning
Trung Huynh

This event took place on 4th October 2017 at 11:30am (10:30 GMT)
Knowledge Media Institute, Berrill Building, The Open University, Milton Keynes, United Kingdom, MK7 6AA

Learning word representations with unsupervised methods has recently been explored extensively. However, the approaches studied have largely been limited to those based on co-occurrence statistics or shallow, windowed bag-of-words neural networks. These methods are computationally efficient enough to be trained on huge corpora containing billions of tokens. However, such bag-of-words models do not exploit the structural nature of language in their inferences. We hypothesise that by using structural architectures, specifically recurrent neural networks, the derived word representations will capture properties learned from the preserved sequential nature of the inputs.
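As an illustration of the general idea (not the speaker's actual model), the sketch below shows one common way word representations can be learned with a recurrent network: a next-word-prediction language model whose embedding layer yields the word vectors, so each vector is shaped by the order of the words around it rather than by an unordered context window. The toy corpus, hyperparameters, and PyTorch implementation are illustrative assumptions.

    # Minimal sketch: word representations from an RNN language model (assumed setup).
    import torch
    import torch.nn as nn

    # Tiny illustrative corpus; real training would use a large text collection.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    word_to_idx = {w: i for i, w in enumerate(vocab)}
    ids = torch.tensor([word_to_idx[w] for w in corpus])

    class RNNWordModel(nn.Module):
        def __init__(self, vocab_size, embed_dim=16, hidden_dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)  # rows become the word vectors
            self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.decoder = nn.Linear(hidden_dim, vocab_size)

        def forward(self, x):
            # The recurrent layer reads the sequence in order, unlike a bag-of-words model.
            h, _ = self.rnn(self.embed(x))
            return self.decoder(h)

    model = RNNWordModel(len(vocab))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # Train on next-word prediction: input is the sequence, target is the sequence shifted by one.
    inputs, targets = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
    for _ in range(200):
        optimizer.zero_grad()
        logits = model(inputs)
        loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
        loss.backward()
        optimizer.step()

    # After training, the embedding matrix holds the learned word representations.
    word_vectors = model.embed.weight.detach()
    print(word_vectors[word_to_idx["cat"]])

In this sketch the representation of each word is updated through gradients that flow back through the recurrent states, which is one way the sequential structure of the input can leave its mark on the resulting vectors.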


The webcast was open to 300 users