Music-driven dance generation has garnered significant attention due to its wide range of industrial applications, particularly in the creation of group choreography. During the group dance generation process, however, most existing methods still face three primary issues: multi-dancer collisions, single-dancer foot sliding, and abrupt dancer swapping in long group dance generation. In this paper, we propose TCDiff++, a music-driven end-to-end framework designed to generate harmonious group dance. Specifically, to mitigate multi-dancer collisions, we utilize a dancer positioning embedding to better maintain the relative positioning among dancers. Additionally, we incorporate a distance-consistency loss to keep inter-dancer distances within plausible ranges. To address single-dancer foot sliding, we introduce a swap mode embedding that indicates dancer swapping patterns and design a Footwork Adaptor that refines the raw motion, thereby minimizing foot sliding. For long group dance generation, we present a long group diffusion sampling strategy that reduces abrupt position shifts by injecting positional information into the noisy input. Furthermore, we integrate a Sequence Decoder layer to enhance the model's ability to selectively process long sequences. Extensive experiments demonstrate that TCDiff++ achieves state-of-the-art performance, particularly in long-duration scenarios, ensuring high-quality and coherent group dance generation.
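To make the distance-consistency loss mentioned above more concrete, the following is a minimal PyTorch sketch; the tensor layout, the bounds d_min/d_max, and the hinge formulation are our assumptions for illustration rather than the paper's exact definition.

```python
import torch

def distance_consistency_loss(root_pos, d_min=0.5, d_max=3.0):
    """Penalize inter-dancer distances outside a plausible range.

    root_pos: (batch, num_dancers, frames, 3) root trajectories (assumed layout).
    d_min, d_max: hypothetical lower/upper bounds on pairwise distance (meters).
    """
    # Pairwise distances between all dancers at every frame.
    diff = root_pos.unsqueeze(1) - root_pos.unsqueeze(2)   # (B, N, N, T, 3)
    dist = diff.norm(dim=-1)                               # (B, N, N, T)

    # Mask out self-distances on the diagonal with an in-range value.
    n = root_pos.shape[1]
    eye = torch.eye(n, dtype=torch.bool, device=root_pos.device)
    dist = dist.masked_fill(eye[None, :, :, None], (d_min + d_max) / 2)

    # Hinge penalties: too close (collision) or too far (scattered formation).
    too_close = torch.relu(d_min - dist)
    too_far = torch.relu(dist - d_max)
    return (too_close + too_far).mean()
```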
Our end-to-end TCDiff++ framework comprises two key components: the Group Dance Decoder (GDD) and the Footwork Adaptor (FA). The GDD first generates a raw motion sequence, free of trajectory overlap, from the given music. The FA then refines the foot movements by leveraging the positional information of the raw motion, producing an adapted motion with improved footstep actions that reduce foot sliding. Finally, the adapted footstep movements are incorporated into the raw motion, yielding a harmonious dance sequence with stable footwork and fewer dancer collisions. Compared to the previous two-stage version, TCDiff++ requires only a single training stage and achieves better footwork-motion coherence.
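A minimal sketch of this GDD-to-FA data flow is given below, assuming a PyTorch implementation; the module interfaces, tensor shapes, and the channel layout used to merge the refined footwork are hypothetical placeholders rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TCDiffPlusPlus(nn.Module):
    """Illustrative forward pass only; the GDD and FA sub-modules are
    treated as black boxes with assumed input/output shapes."""

    def __init__(self, group_dance_decoder: nn.Module, footwork_adaptor: nn.Module):
        super().__init__()
        self.gdd = group_dance_decoder  # music + noisy motion -> raw group motion
        self.fa = footwork_adaptor      # raw root trajectories -> footwork refinement

    def forward(self, noisy_motion, music_feats, t):
        # 1) The GDD denoises a raw group motion sequence conditioned on the
        #    music features and the diffusion step t.
        raw_motion = self.gdd(noisy_motion, music_feats, t)   # (B, N, T, D)

        # 2) The FA predicts footstep refinements from the raw positions.
        root_traj = raw_motion[..., :3]                        # assumed root channels
        foot_delta = self.fa(root_traj)                        # (B, N, T, D_foot)

        # 3) Merge the refined footwork back into the raw motion
        #    (hypothetical layout: the last D_foot channels are foot features).
        adapted_motion = raw_motion.clone()
        adapted_motion[..., -foot_delta.shape[-1]:] += foot_delta
        return adapted_motion
```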
Our Long Group Diffusion Sampling (LGDS) method first generates partially overlapping segments, which are then merged into a complete sequence. Unlike naive sampling, LGDS enforces consistency at the input phase rather than the sampling phase. This approach reduces randomness and ensures cleaner positional information during generation, thereby reducing abrupt swaps.
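For clarity, here is a minimal sketch of such overlap-based sampling in PyTorch; the helper sample_segment, the segment length, the overlap width, and the channel layout (root positions in the first three motion channels) are all assumptions, and the sketch only illustrates injecting known positions into the noisy input of the next segment.

```python
import torch

@torch.no_grad()
def long_group_diffusion_sampling(sample_segment, music_feats, num_dancers,
                                  motion_dim, seg_len=150, overlap=30, pos_dims=3):
    """Overlap-based long sampling sketch.

    sample_segment(music_slice, init_noise) -> (B, N, L, motion_dim) motion.
    music_feats: (B, T_total, C) music features for the full piece.
    The first `pos_dims` motion channels are treated as root positions.
    """
    batch, total_len, _ = music_feats.shape
    segments, start, prev_tail = [], 0, None

    while start < total_len:
        music_slice = music_feats[:, start:start + seg_len, :]
        cur_len = music_slice.shape[1]
        noise = torch.randn(batch, num_dancers, cur_len, motion_dim)

        if prev_tail is not None:
            # Inject the previous segment's tail positions into the noisy
            # input, so overlapping frames start from consistent locations.
            k = min(overlap, cur_len)
            noise[:, :, :k, :pos_dims] = prev_tail[:, :, :k, :pos_dims]

        segment = sample_segment(music_slice, noise)
        prev_tail = segment[:, :, -overlap:, :]
        # Keep only the non-overlapping part of every segment after the first.
        segments.append(segment if start == 0 else segment[:, :, overlap:, :])
        start += seg_len - overlap

    return torch.cat(segments, dim=2)
```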
We also conducted a user study based on four criteria: motion realism, music-motion correlation, formation aesthetics, and harmony of dancers. Our model garnered greater user favor, demonstrating its superiority in aesthetic appeal.