IRG-MotionLLM: Interleaving Motion Generation, Assessment and Refinement for Text-to-Motion Generation
Recent advances in motion-aware large language models have shown remarkable promise for unifying motion understanding and generation tasks. However, these models typically treat understanding and generation separately, limiting the mutual benefits that could arise from interactive feedback between tasks. In this work, we reveal that motion assessment and refinement tasks act as crucial bridges to enable bidirectional knowledge flow between understanding and generation. Leveraging this insight, we propose Interleaved Reasoning for Motion Generation (IRMoGen), a novel paradigm that tightly couples motion generation with assessment and refinement through iterative text-motion dialogue. To realize this, we introduce IRG-MotionLLM, the first model that seamlessly interleaves motion generation, assessment, and refinement to improve generation performance.
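Below is a minimal sketch of the interleaved generate-assess-refine loop described above. All names here (the model handle, generate_motion, assess_motion, refine_motion, the quality threshold) are illustrative assumptions for exposition, not the released API.

```python
# Minimal, hypothetical sketch of an iterative text-motion dialogue:
# generate a motion, assess it against the prompt, and refine it using
# the assessment feedback until it passes or the round budget runs out.

from dataclasses import dataclass


@dataclass
class Assessment:
    score: float   # scalar quality estimate for the candidate motion
    feedback: str  # textual critique used to condition the next refinement


def interleaved_generation(model, text_prompt, max_rounds=3, threshold=0.8):
    """Interleave generation, assessment, and refinement for one prompt."""
    motion = model.generate_motion(text_prompt)
    for _ in range(max_rounds):
        assessment: Assessment = model.assess_motion(text_prompt, motion)
        if assessment.score >= threshold:
            break  # candidate already matches the text description well
        # Condition refinement on both the original prompt and the critique,
        # so knowledge from the assessment flows back into generation.
        motion = model.refine_motion(text_prompt, motion, assessment.feedback)
    return motion
```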
Coming soon.
If you find our work helpful for your research, please consider citing it:
@article{li2025irg-motionllm,
  title={IRG-MotionLLM: Interleaving Motion Generation, Assessment and Refinement for Text-to-Motion Generation},
  author={Li, Yuan-Ming and Yang, Qize and Lei, Nan and Fu, Shenghao and Zeng, Ling-An and Hu, Jian-Fang and Wei, Xihan and Zheng, Wei-Shi},
  journal={arXiv preprint arXiv:2512.10730},
  year={2025}
}
Our models and code are released under the Apache License 2.0. Our data is released under the MIT License.
We sincerely acknowledge and appreciate the exceptional open-source contributions that form the foundation of our work: Motion-Agent, MotionGPT, AToM, MARDM, Text-to-Motion, VLM-R1.
