Tip: Choose a reference video with clear, well-defined movements. Avoid clips with heavy occlusion (objects blocking the person), rapid camera movement, or multiple people.
Summary
The Motion Transfer Node captures motion from a reference video and applies it to a character image, generating a new video in which your character performs the movements from the reference. Connect a still character image and a reference video of someone dancing, walking, gesturing, or performing any other action, and the AI transfers that motion onto your character with realistic body movement and natural physics. This node requires two inputs:
- Character Image — the subject you want to animate.
- Reference Video — the motion source you want to transfer from.
How to Use
Connect a Character Image
Link a character image via the Character Image input handle (marked in orange). This is the subject that will be animated with the transferred motion. Use a clear, well-lit image where the character’s full body or relevant body parts are visible.
Connect a Reference Video
Link a video via the Reference Video input handle (marked in green). This is the motion source—the AI will extract the movement from this video and map it onto your character.
Configure Settings
Select your Model, adjust Guidance Scale, Inference Steps, and other parameters from the Properties panel (see settings table below).
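The steps above are performed in the UI, but the wiring can be sketched in code. The function and field names below are illustrative only, not a real API; the sketch simply shows that both inputs are mandatory and that settings travel alongside them:

```python
# Hypothetical sketch of the Motion Transfer Node's inputs.
# The node is configured in the UI; this payload shape is illustrative only.

def build_motion_transfer_request(character_image, reference_video, **settings):
    """Assemble a request: a still character image plus a motion-source video."""
    if character_image is None or reference_video is None:
        raise ValueError("Both a character image and a reference video are required")
    return {
        "character_image": character_image,   # orange input handle
        "reference_video": reference_video,   # green input handle
        "settings": settings,                 # model, guidance scale, etc.
    }

request = build_motion_transfer_request(
    "dancer_character.png", "dance_reference.mp4", model="Wan 2.2 Move"
)
```

Omitting either input raises an error, mirroring the node's requirement that both handles are connected before generation.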
Choosing the Right Settings
| Setting | Type | Impact on Output |
|---|---|---|
| Model | Dropdown (e.g., Wan 2.2 Move) | Selects the AI model for motion transfer. Different models handle body types, motion complexity, and rendering quality differently. |
| Guidance Scale | Slider (default: 1) | Controls how closely the output follows the reference motion. Lower values allow more creative freedom; higher values produce a stricter match to the source motion. |
| Resolution | Dropdown (e.g., 480p, 720p) | Determines the output video resolution. Higher resolution captures finer detail but takes longer to generate. |
| Inference Steps | Slider (default: 20) | Controls how many processing passes the AI runs. More steps generally produce smoother, higher-quality results but increase generation time. |
| Video Quality | Dropdown (e.g., High, Medium, Low) | Sets the overall rendering quality of the output video. |
| Seed | Number Input | A fixed number for reproducible results across generations. |
Sample Use Cases
Viral Dance and Trend Videos
Recreate trending dances and challenges without filming a new performance: transfer the choreography from a reference clip onto your own character to produce shareable content quickly.
Animated Character Performances
Bring concept art or illustrated characters to life by transferring real human performances onto them. Record a quick acting reference, and your character inherits every gesture, head tilt, and body movement.
Virtual Try-On with Movement
Combine a fashion image with a walking or posing reference video to showcase clothing in motion, giving e-commerce customers a dynamic view of how garments look and flow on a moving body.

