Shaoli Huang


I am currently leading the interaction technology R&D team at Astribot, where I focus on the research and development of advanced human-robot interaction technologies. My work explores cutting-edge areas including multimodal large models, multimodal collaborative expression, robot agents, and humanoid action generation algorithms. A core mission of my team is to build natural and intelligent human-robot interaction systems, accelerating the real-world deployment of robot products.

Previously, I was a senior researcher at Tencent AI Lab. I obtained my Ph.D. from the University of Technology Sydney under the supervision of Professor Dacheng Tao and later served as a research fellow at the University of Sydney. My research interests emphasize practical applications, with a strong focus on digital human and embodied AI technologies such as video-based human motion capture, motion generation, and motion retargeting. I am passionate about pushing the boundaries of how robots perceive, understand, and interact with people in dynamic environments.

news

Dec 10, 2024 One paper accepted by AAAI 2025! :sparkles:
Nov 05, 2024 One paper accepted by 3DV 2025! :sparkles:
Oct 10, 2024 One spotlight paper accepted by NeurIPS 2024! :sparkles:
Oct 05, 2024 Three papers accepted by WACV 2025! :sparkles:
Jul 01, 2024 Three papers (one Oral) accepted by ECCV 2024! :sparkles:
Apr 30, 2024 One paper accepted by ACM SIGGRAPH 2024. Featured in Technical Papers Video Trailer! :sparkles:
Feb 27, 2024 One spotlight paper accepted by CVPR 2024! :sparkles:
Jan 16, 2024 One paper accepted by ICLR 2024! :sparkles:

selected publications

  1. ECCV
    Realistic Human Motion Generation with Cross-Diffusion Models
    Zeping Ren, Shaoli Huang, and Xiu Li
    2024
  2. ECCV
    SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark
    Zhengdi Yu, Shaoli Huang, Yongkang Cheng, and Tolga Birdal
    2024
  3. SIGGRAPH
    Taming Diffusion Probabilistic Models for Character Control
    Rui Chen*, Mingyi Shi*, Shaoli Huang, Ping Tan, Taku Komura, and Xuelin Chen
    2024
  4. AAAI
    HuTuMotion: Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback
    Gaoge Han, Shaoli Huang, Mingming Gong, and Jinglei Tang
    2024
  5. ICLR
    TapMo: Shape-aware Motion Generation of Skeleton-free Characters
    Jiaxu Zhang*, Shaoli Huang*, Zhigang Tu, Xin Chen, Xiaohang Zhan, Gang Yu, and Ying Shan
    2024
  6. CVPR
    Programmable Motion Generation for Open-Set Motion Control Tasks
    Hanchao Liu, Xiaohang Zhan, Shaoli Huang, Tai-Jiang Mu, and Ying Shan
    2024
  7. ICCV
    LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation
    Yihao Zhi*, Xiaodong Cun*, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, and Shenghua Gao
    2023
  8. CVPR
    ACR: Attention Collaboration-Based Regressor for Arbitrary Two-Hand Reconstruction
    Zhengdi Yu, Shaoli Huang, Chen Fang, Toby P Breckon, and Jue Wang
    2023
  9. CVPR
    Learning Anchor Transformations for 3D Garment Animation
    Fang Zhao, Zekun Li, Shaoli Huang, Junwu Weng, Tianfei Zhou, Guo-Sen Xie, Jue Wang, and Ying Shan
    2023
  10. CVPR
    Skinned Motion Retargeting with Residual Perception of Motion Semantics & Geometry
    Jiaxu Zhang, Junwu Weng, Di Kang, Fang Zhao, Shaoli Huang, Xuefei Zhe, Linchao Bao, Ying Shan, Jue Wang, and Zhigang Tu
    2023