Takahiro Yoshida

  • Gaussian Hierarchical Latent Dirichlet Allocation: Bringing Polysemy Back

    Abstract

    Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation and Gaussian latent Dirichlet allocation: the former uses multinomial distributions over words, while the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in the sense that it does not capture the polysemy of a word such as “bank.” In this paper, we show that Gaussian latent Dirichlet allocation can recover the ability to capture polysemy by introducing a hierarchical structure in the set of topics that the model can use to represent a given document. Our Gaussian hierarchical latent Dirichlet allocation significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations compared with hierarchical latent Dirichlet allocation. Our extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy over a wide range of corpora and word embedding vectors.
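
    The contrast between the two topic representations can be made concrete with a small sketch. The snippet below uses toy values (the vocabulary, embedding vectors, and topic parameters are hypothetical, not taken from the paper) to show how a standard LDA topic scores a word by its multinomial probability, whereas a Gaussian LDA topic scores the word's fixed pre-trained embedding under a multivariate Gaussian density.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary and stand-in "pre-trained" embeddings (hypothetical values).
    vocab = ["bank", "river", "money"]
    embedding = {w: rng.normal(size=3) for w in vocab}

    # LDA topic: a multinomial (categorical) distribution over the vocabulary.
    lda_topic = np.array([0.5, 0.1, 0.4])

    def lda_word_prob(word):
        """Probability of `word` under the multinomial topic."""
        return float(lda_topic[vocab.index(word)])

    # Gaussian LDA topic: a multivariate Gaussian (mean, covariance) over embeddings.
    topic_mean = np.zeros(3)
    topic_cov = np.eye(3)

    def gaussian_word_density(word):
        """Density of the word's embedding vector under the Gaussian topic."""
        x = embedding[word] - topic_mean
        inv_cov = np.linalg.inv(topic_cov)
        norm_const = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(topic_cov))
        return float(np.exp(-0.5 * x @ inv_cov @ x) / norm_const)

    print(lda_word_prob("bank"), gaussian_word_density("bank"))
    ```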

    Introduction

    Topic models are widely used to identify the latent representation of a set of documents. Since latent Dirichlet allocation (LDA) [4] was introduced, topic models have been used in a wide variety of applications. Recent work includes the analysis of legislative text [24], detection of malicious websites [33], and analysis of the narratives of dermatological disease [23]. The modular structure of LDA, and of graphical models in general [17], has made it possible to create various extensions of the plain-vanilla model. Significant works include the correlated topic model (CTM), which incorporates the correlation among topics that co-occur in a document [6]; hierarchical LDA (hLDA), which jointly learns the underlying topics and the hierarchical relational structure among them [3]; and the dynamic topic model, which captures the time evolution of topics [7].

    WP017