Improving performance of topic models by variable grouping
Evgeniy Bart
Topic models have a wide range of applications, including modeling of text documents, images, user preferences, product rankings, and many others. However, learning and inference in topic models may become difficult, especially for large problems, because inference techniques such as Gibbs sampling often converge to poor models due to the abundance of local minima in large datasets. In this paper, we propose a general method of improving the convergence properties of topic models. The method, called the 'grouping transform', works by introducing auxiliary variables that represent assignments of the original model variables to groups. As a result, it becomes possible to resample an entire group of variables at a time. This allows the sampler to make larger moves through the state space, thus improving performance. The proposed ideas are illustrated on several topic models and several text and image datasets. We show that the grouping transform significantly improves performance over standard models.
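To make the core idea concrete, the sketch below shows group-wise resampling in a collapsed Gibbs sampler for an LDA-style topic model: instead of resampling each token's topic individually, all tokens in a group are jointly assigned one topic, so the sampler moves through the state space in larger steps. This is an illustrative sketch only; in the paper's grouping transform the group assignments are themselves auxiliary latent variables that are resampled, whereas here the grouping is fixed, and the variable names (`n_dk`, `n_kw`, etc.) are our own.

```python
import numpy as np

def grouped_gibbs_sweep(z, groups, doc_of, word_of,
                        n_dk, n_kw, n_k, alpha, beta, rng):
    """One sweep of group-wise collapsed Gibbs sampling for an LDA-style
    model: every token in a group is resampled to one shared topic.
    Illustrative sketch; not the paper's exact algorithm."""
    K, V = n_kw.shape
    for members in groups:
        # Remove every token in the group from the count tables.
        for i in members:
            d, w, k = doc_of[i], word_of[i], z[i]
            n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
        # Joint log-probability of assigning the whole group topic k,
        # accumulated by adding the tokens back one at a time.
        logp = np.empty(K)
        for k in range(K):
            lp = 0.0
            for i in members:
                d, w = doc_of[i], word_of[i]
                lp += np.log((n_dk[d, k] + alpha) * (n_kw[k, w] + beta)
                             / (n_k[k] + V * beta))
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
            for i in members:  # undo the trial increments
                d, w = doc_of[i], word_of[i]
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
            logp[k] = lp
        # Sample one topic for the entire group: a large state-space move.
        p = np.exp(logp - logp.max()); p /= p.sum()
        k_new = rng.choice(K, p=p)
        for i in members:
            d, w = doc_of[i], word_of[i]
            z[i] = k_new
            n_dk[d, k_new] += 1; n_kw[k_new, w] += 1; n_k[k_new] += 1
    return z

# Toy usage on a six-token corpus with a hypothetical grouping.
rng = np.random.default_rng(0)
K, V, D = 2, 5, 3
doc_of = np.array([0, 0, 1, 1, 2, 2])
word_of = np.array([0, 1, 1, 2, 3, 4])
z = rng.integers(K, size=6)
n_dk = np.zeros((D, K), int)
n_kw = np.zeros((K, V), int)
n_k = np.zeros(K, int)
for i in range(6):
    n_dk[doc_of[i], z[i]] += 1
    n_kw[z[i], word_of[i]] += 1
    n_k[z[i]] += 1
groups = [[0, 2], [1, 3], [4, 5]]
z = grouped_gibbs_sweep(z, groups, doc_of, word_of,
                        n_dk, n_kw, n_k, alpha=0.5, beta=0.5, rng=rng)
```

After a sweep, all tokens within a group share a topic, and the count tables remain consistent with the assignments, so the move is a valid blocked Gibbs update for the fixed grouping.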