news

Dec 02, 2024 I am joining Microsoft as a Senior Researcher. :sparkles: :smile:
Nov 01, 2024 Our paper One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation is now public on the NVIDIA website. The work shows the broad potential of diffusion distillation for robotics.
Nov 01, 2024 We are sharing a series of diffusion distillation works:
  • Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation [ICML 2024]: a fundamental distillation technique, SiD, built on Fisher divergence.
  • One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation: coupled with CFG, SiD works well for one-step text-to-image generation.
  • Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step: introduces data dependence to eliminate pretraining bias and further boost SiD's performance.
Oct 01, 2024 Our paper Diffusion Policies creating a Trust Region for Offline Reinforcement Learning was published at NeurIPS 2024 and the code was released on GitHub.
Mar 15, 2024 I will join the NVIDIA Deep Imagination Research group, led by Ming-Yu Liu, for a 2024 summer internship. :sparkles: :smile:
Jan 01, 2024
  1. Our new paper Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts is now public on arXiv and the code has been publicly released on GitHub.
  2. Continuing my part-time internship with the Microsoft GenAI team for Fall 2023 and Spring 2024.
Jan 01, 2024
  1. Our paper In-Context Learning Unlocked for Diffusion Models has been accepted by NeurIPS 2023 and the code has been publicly released on GitHub with diffusers support.
  2. Our paper Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models has been accepted by NeurIPS 2023 and the code has been publicly released on GitHub.
May 01, 2023
  1. Our new paper In-Context Learning Unlocked for Diffusion Models is now public on arXiv and the code has been publicly released on GitHub.
  2. Our new paper Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models is now public on arXiv and the code will be released soon.
  3. I am joining the Microsoft Azure AI team for a 2023 summer internship. :sparkles: :smile:
Jan 30, 2023
  1. Our paper Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning was accepted by ICLR 2023 and the code was publicly released on GitHub.
  2. Our paper Diffusion-GAN: Training GANs with Diffusion was accepted by ICLR 2023 and the code was publicly released on GitHub.
  3. Our paper Probabilistic Conformal Prediction Using Conditional Random Samples was accepted by AISTATS 2023 and the code was publicly released on GitHub.
Sep 15, 2022 Joined the Microsoft Azure AI team for a part-time internship for Fall 2022 and Spring 2023. :sparkles: :smile:
Aug 25, 2022 Our new paper Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning is now public on arXiv and the code will be publicly released soon.
Jun 21, 2022 Our new paper Diffusion-GAN: Training GANs with Diffusion is now public on arXiv and the code was released on GitHub.
Jun 20, 2022 Our new paper Probabilistic Conformal Prediction Using Conditional Random Samples was accepted for a Spotlight presentation at the ICML Distribution-Free Uncertainty Quantification workshop, 2022! The PDF is public on arXiv and the code was released on GitHub.
May 23, 2022 Joined the Twitter Cortex RecSys Research team for my Summer 2022 internship. :sparkles: :smile: