I am a first-year PhD student at MIT CSAIL working with Antonio Torralba and Phillip Isola. Previously, I worked at Facebook AI Research (FAIR) on PyTorch. Before joining Facebook, I studied computer science and statistics at the University of California, Berkeley. Since my undergraduate studies, I have worked with Professors Stuart J. Russell, Ren Ng, and Alexei A. Efros as a researcher at Berkeley AI Research (BAIR).

Please find my full résumé here.

Publications

  1. Dataset Distillation
    [Project Page] [Code] [arXiv]

    Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros

    We attempt to distill the knowledge of a large training dataset into a small one. The idea is to synthesize a small number of data points that need not come from the correct data distribution, but that, when given to the learning algorithm as training data, yield a model approximating one trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images and achieve performance close to the original with only a few steps of gradient descent, given a fixed network initialization. Experiments on multiple datasets show the advantage of our approach over alternative methods across various initialization settings and learning objectives.

    [Figure: dataset_distillation_fixed_mnist]
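    To make the objective concrete, here is a minimal, hypothetical sketch of one distillation step for a fixed initialization; `model_fn`, `lr_syn`, and the functional-weights interface are illustrative assumptions, not the released implementation.

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(theta0, x_syn, y_syn, lr_syn, x_real, y_real, model_fn):
        """Outer-loop objective: take one gradient step on the synthetic data
        starting from the fixed initialization theta0, then measure the loss of
        the adapted weights on real data.  Backpropagating this value updates
        x_syn (and optionally lr_syn), not the network weights.

        theta0   -- list of parameter tensors with requires_grad=True
        model_fn -- functional forward pass: model_fn(params, inputs) -> logits
        """
        # inner step: "train" on the distilled images
        inner_loss = F.cross_entropy(model_fn(theta0, x_syn), y_syn)
        grads = torch.autograd.grad(inner_loss, theta0, create_graph=True)
        theta1 = [p - lr_syn * g for p, g in zip(theta0, grads)]
        # outer objective: how well the adapted weights fit the real data
        return F.cross_entropy(model_fn(theta1, x_real), y_real)
    ```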

  2. Meta-Learning MCMC Proposals
    [NIPS 2018] [arXiv]

    Tongzhou Wang, Yi Wu, David A. Moore, Stuart J. Russell

    We automate MCMC proposal construction by training neural networks as fast approximations to block Gibbs conditionals. The learned proposals generalize to occurrences of common structural motifs both within a given model and across models, allowing us to build a library of learned inference primitives that accelerate inference on unseen models with no model-specific training required.

    Oral presentation at the ICML 2017 AutoML workshop.

    [Figure: meta_learning_mcmc_gmm_trace]
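    To illustrate how such a learned proposal plugs into a sampler, here is a small hypothetical Metropolis-Hastings step; `proposal`, `log_joint`, and the distribution interface are assumed names for illustration, not the paper's code.

    ```python
    import numpy as np

    def mh_step(x, block, proposal, log_joint, rng):
        """One Metropolis-Hastings update of `block` in the state dict `x`.

        `proposal(x, block)` is assumed to return a distribution object with
        .sample(rng) and .log_prob(value); in this sketch it stands in for a
        neural network approximating the block's Gibbs conditional given its
        Markov blanket.
        """
        q = proposal(x, block)              # depends only on x outside the block
        new_val = q.sample(rng)
        x_new = {**x, block: new_val}
        # acceptance ratio; this would be 1 if q were the exact conditional
        log_alpha = (log_joint(x_new) - log_joint(x)
                     + q.log_prob(x[block]) - q.log_prob(new_val))
        return x_new if np.log(rng.uniform()) < log_alpha else x
    ```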

  3. Learning to Synthesize a 4D RGBD Light Field from a Single Image
    [ICCV 2017] [arXiv]

    Pratul Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng

    A machine learning algorithm that takes a single 2D RGB image as input and synthesizes a 4D RGBD light field (the color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset. Our algorithm is unique in predicting RGBD for each light field ray and in improving unsupervised single-image depth estimation by enforcing consistency among the depths of rays that intersect the same scene point.

    [Figure: light-field-synthesis-pipeline]
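    The ray-depth consistency idea can be sketched as a penalty between the disparity maps predicted for two sub-aperture views; the parameterization and names below are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def disparity_consistency(disp_a, disp_b, du, dv):
        """Rough consistency penalty between disparity maps of two sub-aperture
        views separated by (du, dv) in the camera plane.  Assumes a scene point
        seen at (x, y) with disparity d in view A appears near
        (x + du * d, y + dv * d) in view B (nearest-neighbour lookup)."""
        h, w = disp_a.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xb = np.clip(np.rint(xs + du * disp_a), 0, w - 1).astype(int)
        yb = np.clip(np.rint(ys + dv * disp_a), 0, h - 1).astype(int)
        return np.abs(disp_a - disp_b[yb, xb]).mean()
    ```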

Research Projects

  1. Improved Training of Cycle-Consistent Adversarial Networks

    Tongzhou Wang, with the research group of Prof. Alexei A. Efros

    Ongoing project on improving CycleGAN by designing better formulations and/or automatic dataset selection algorithms.

    Relevant vision course project: CycleGAN with Better Cycles [paper, slides].

  2. Analysis on Punctuations in Online Reviews [paper, poster]

    Tongzhou Wang

    An analysis of punctuation structures in positive and negative online Steam reviews using a hidden Markov model: the latent sentence-type variables are the hidden states, and the observed punctuation within each sentence is modeled as a Markov chain whose transition probabilities depend on the sentence type.

    Course project for a graduate-level statistical learning theory class.

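    For intuition, below is a toy generative sketch of the model described above; the number of sentence types, the transition matrices, and the punctuation alphabet are made-up illustrations, not values estimated in the project.

    ```python
    import numpy as np

    PUNCT = [",", ".", "!", "?"]
    N_TYPES = 2
    # P(type_t | type_{t-1}) across consecutive sentences
    TYPE_TRANS = np.array([[0.8, 0.2],
                           [0.3, 0.7]])
    # per-type Markov chains over punctuation marks (random for illustration)
    PUNCT_TRANS = np.random.default_rng(0).dirichlet(
        np.ones(len(PUNCT)), size=(N_TYPES, len(PUNCT)))

    def sample_review(n_sentences, max_punct=5, seed=0):
        """Sample a review as a list of per-sentence punctuation sequences:
        sentence types follow a Markov chain, and punctuation within each
        sentence follows the chain of the current (hidden) type."""
        rng = np.random.default_rng(seed)
        t = rng.integers(N_TYPES)
        review = []
        for _ in range(n_sentences):
            t = rng.choice(N_TYPES, p=TYPE_TRANS[t])
            p = rng.integers(len(PUNCT))
            marks = [PUNCT[p]]
            for _ in range(rng.integers(1, max_punct)):
                p = rng.choice(len(PUNCT), p=PUNCT_TRANS[t, p])
                marks.append(PUNCT[p])
            review.append(marks)
        return review
    ```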