Zhendong Wang

University of Texas at Austin; zhendong.wang@utexas.edu


Austin, Texas, US

I am a fourth-year PhD student in the Department of Statistics & Data Science at the University of Texas at Austin, supervised by Professor Mingyuan Zhou. Before coming to UT, I completed my Master's degree in Data Science at Columbia University. I received my Bachelor's degree in Civil Engineering from Tongji University in China, during which I spent one year as an exchange student at the University of California, Berkeley.

Research Interests: I have broad interests in statistical machine learning. Specifically, my research focuses on:

  • Deep Generative Models, e.g., diffusion models, GANs, etc.
  • Reinforcement Learning, e.g., online/offline RL, imitation learning, etc.
  • Multimodal Large Language Models, e.g., interleaved text-image generation, preference optimization algorithms, etc.
  • Uncertainty Quantification, e.g., conformal prediction, etc.

I am set to graduate in December 2024. Please don't hesitate to get in touch if you're interested in my research or would like to discuss potential opportunities.

news

Mar 15, 2024 I will join the NVIDIA Deep Imagination Research group, led by Ming-Yu Liu, for a 2024 summer internship. :sparkles: :smile:
Jan 1, 2024
  1. Our new paper Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts is now available on arXiv, and the code has been released on GitHub.
  2. I will continue my part-time internship with the Microsoft GenAI team through Fall 2023 and Spring 2024.
Jan 1, 2024
  1. Our paper In-Context Learning Unlocked for Diffusion Models has been accepted to NeurIPS 2023, and the code has been released on GitHub with diffusers support.
  2. Our paper Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models has been accepted to NeurIPS 2023, and the code has been released on GitHub.
May 1, 2023
  1. Our new paper In-Context Learning Unlocked for Diffusion Models is now available on arXiv, and the code has been released on GitHub.
  2. Our new paper Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models is now available on arXiv, and the code will be released soon.
  3. I am joining the Microsoft Azure AI team for a 2023 summer internship. :sparkles: :smile:
Jan 30, 2023
  1. Our paper Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning has been accepted to ICLR 2023, and the code has been released on GitHub.
  2. Our paper Diffusion-GAN: Training GANs with Diffusion has been accepted to ICLR 2023, and the code has been released on GitHub.
  3. Our paper Probabilistic Conformal Prediction Using Conditional Random Samples has been accepted to AISTATS 2023, and the code has been released on GitHub.

selected publications

  1. Preprint
    Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
    Yin, Yueqin, Wang, Zhendong, Gu, Yi, Huang, Hai, Chen, Weizhu, and Zhou, Mingyuan
    arXiv preprint arXiv:2402.10958, 2024
  2. NeurIPS 2023
    In-Context Learning Unlocked for Diffusion Models
    Wang, Zhendong, Jiang, Yifan, Lu, Yadong, Shen, Yelong, He, Pengcheng, Chen, Weizhu, Wang, Zhangyang, and Zhou, Mingyuan
    Advances in Neural Information Processing Systems, 2023
  3. NeurIPS 2023
    Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models
    Wang, Zhendong, Jiang, Yifan, Zheng, Huangjie, Wang, Peihao, He, Pengcheng, Wang, Zhangyang, Chen, Weizhu, and Zhou, Mingyuan
    Advances in Neural Information Processing Systems, 2023
  4. ICLR 2023
    Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning
    Wang, Zhendong, Hunt, Jonathan J, and Zhou, Mingyuan
    International Conference on Learning Representations, 2023
  5. ICLR 2023
    Diffusion-GAN: Training GANs with Diffusion
    Wang, Zhendong, Zheng, Huangjie, He, Pengcheng, Chen, Weizhu, and Zhou, Mingyuan
    International Conference on Learning Representations, 2023
  6. AISTATS 2023
    Probabilistic Conformal Prediction Using Conditional Random Samples
    Wang, Zhendong*, Gao, Ruijiang*, Yin, Mingzhang*, Zhou, Mingyuan, and Blei, David M
    International Conference on Artificial Intelligence and Statistics, 2023
  7. ICML 2020
    Thompson Sampling via Local Uncertainty
    Wang, Zhendong, and Zhou, Mingyuan
    International Conference on Machine Learning, 2020