
Test of time

  • Tuesday – plenary – 09:30 – Dynamic Topic Models – David M. Blei (Princeton University), John D. Lafferty (Carnegie Mellon University) – Paper | Abstract
    A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR’ed archives of the journal Science from 1880 through 2000.
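The abstract above places a state space model on the natural parameters of each topic's multinomial. As a minimal sketch of that idea (the random-walk variance `sigma` and all function names are illustrative, not the paper's actual inference machinery):

```python
import numpy as np

def softmax(beta):
    """Map natural parameters to a probability distribution over words."""
    e = np.exp(beta - beta.max())
    return e / e.sum()

def evolve_topic(beta0, sigma, T, rng):
    """Random-walk state space model on a topic's natural parameters.

    beta_t | beta_{t-1} ~ N(beta_{t-1}, sigma^2 I); the topic's word
    distribution at time t is softmax(beta_t). Illustrative sketch only;
    the paper fits this model with variational Kalman filtering.
    """
    betas = [beta0]
    for _ in range(1, T):
        betas.append(betas[-1] + sigma * rng.standard_normal(beta0.shape))
    return np.array([softmax(b) for b in betas])

rng = np.random.default_rng(0)
# 10 time slices of one topic over a toy 5-word vocabulary
topics = evolve_topic(np.zeros(5), sigma=0.1, T=10, rng=rng)
```

Each row of `topics` is a valid word distribution, and adjacent rows drift smoothly, which is what lets the model track a topic's vocabulary change over a century of *Science* articles.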

Best paper

  • Monday – Ballroom 3+4 – 12:04 – Dueling Network Architectures for Deep Reinforcement Learning – Ziyu Wang (Google Inc.), Tom Schaul (Google Inc.), Matteo Hessel (Google DeepMind), Hado van Hasselt (Google DeepMind), Marc Lanctot (Google DeepMind), Nando de Freitas (University of Oxford) – Paper | Reviews | Poster session on Monday afternoon, 3:00pm–7:00pm | Abstract
    In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.
  • Monday – Ballroom 1+2+Juliard – 03:44 – Pixel Recurrent Neural Networks – Aaron Van den Oord (Google DeepMind), Nal Kalchbrenner (Google DeepMind), Koray Kavukcuoglu (Google DeepMind) – Paper | Reviews | Poster session on Tuesday morning, 10am–1pm | Abstract
    Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.
  • Tuesday – Soho – 05:44 – Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling – Christopher De Sa (Stanford University), Chris Re (Stanford University), Kunle Olukotun (Stanford University) – Paper | Reviews | Poster session on Wednesday morning, 10am–1pm | Abstract
    Gibbs sampling is a Markov chain Monte Carlo technique commonly used for estimating marginal distributions. To speed up Gibbs sampling, there has recently been interest in parallelizing it by executing asynchronously. While empirical results suggest that many models can be efficiently sampled asynchronously, traditional Markov chain analysis does not apply to the asynchronous case, and thus asynchronous Gibbs sampling is poorly understood. In this paper, we derive a better understanding of the two main challenges of asynchronous Gibbs: bias and mixing time. We show experimentally that our theoretical results match practical outcomes.
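The dueling-architecture abstract above describes factoring the Q-function into a state value stream and an action advantage stream; the paper combines the two by subtracting the mean advantage so the decomposition is identifiable. A minimal numpy sketch of that combining step (names are illustrative; the real streams are outputs of a shared convolutional network):

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine the value and advantage streams into Q-values.

    Q(s, a) = V(s) + (A(s, a) - mean_a' A(s, a')). Subtracting the mean
    advantage pins down the otherwise unidentifiable V/A split without
    changing which action is greedy.
    """
    return value + (advantages - advantages.mean(axis=-1, keepdims=True))

# one state with V(s) = 1.0 and three action advantages
q = dueling_q(np.array([1.0]), np.array([[0.5, 1.5, -2.0]]))
```

Because only a constant is subtracted per state, `argmax` over actions is unchanged, while the mean of the resulting Q-values equals the value stream.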
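The Pixel RNN abstract above models an image autoregressively: each pixel's discrete value is predicted from all previously generated pixels in raster-scan order. A toy sampler makes the factorization concrete (the `conditional` argument is a stand-in for the paper's recurrent network; here a uniform distribution is used purely for illustration):

```python
import numpy as np

def sample_image(h, w, conditional, rng, levels=256):
    """Autoregressive sampling of an h-by-w image in raster-scan order.

    `conditional(img, i, j)` returns a probability vector over the
    `levels` discrete pixel values given the pixels generated so far.
    In the paper this conditional is a deep recurrent network; this
    sketch only shows the sampling loop.
    """
    img = np.zeros((h, w), dtype=np.int64)
    for i in range(h):
        for j in range(w):
            p = conditional(img, i, j)
            img[i, j] = rng.choice(levels, p=p)
    return img

# illustrative stand-in conditional: uniform over the 256 pixel values
uniform = lambda img, i, j: np.full(256, 1 / 256)
rng = np.random.default_rng(0)
img = sample_image(4, 4, uniform, rng)
```

Modeling the raw values as a discrete distribution (rather than a continuous density) is what lets the network assign an exact likelihood to every image.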
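The asynchronous-Gibbs abstract above concerns running single-site Gibbs updates in parallel on possibly stale reads. A sequential single-site sampler for a toy 1-D Ising chain shows the update being parallelized (illustrative sketch; the paper's contribution is the bias and mixing-time analysis of the asynchronous variant, not this sampler):

```python
import numpy as np

def gibbs_ising_chain(n, beta, steps, rng):
    """Single-site Gibbs sampling for a 1-D Ising chain of n spins.

    Each step resamples one spin from its conditional given its
    neighbours: P(x_i = +1 | rest) = sigmoid(2 * beta * sum of
    neighbouring spins). An asynchronous sampler runs many such
    updates concurrently on stale state, the regime the paper studies.
    """
    x = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        field = beta * (x[i - 1] if i > 0 else 0) \
              + beta * (x[i + 1] if i < n - 1 else 0)
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1 if rng.random() < p_plus else -1
    return x

rng = np.random.default_rng(0)
x = gibbs_ising_chain(20, beta=0.5, steps=1000, rng=rng)
```

In the asynchronous setting, the `field` read may lag behind updates made by other workers, which is why the classical Markov chain analysis no longer applies directly.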