Tip: Choose a reference video with clear, well-defined movements. Avoid clips with heavy occlusion (objects blocking the person), rapid camera movement, or multiple people.

Summary

The Motion Transfer Node captures motion from a reference video and applies it to a character image, generating a new video where your character performs the exact movements from the reference. Connect a still character image and a reference video of someone dancing, walking, gesturing, or performing any action and the AI transfers that motion onto your character with realistic body movement and natural physics. This node requires two inputs:
  • Character Image — the subject you want to animate.
  • Reference Video — the motion source you want to transfer from.
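The two required inputs can be modeled as a simple validated structure. The sketch below is illustrative only; the class and field names are assumptions, not part of the product's actual API:

```python
from dataclasses import dataclass


@dataclass
class MotionTransferInputs:
    """Hypothetical container for the node's two required inputs."""
    character_image: str   # path or URL to the still character image
    reference_video: str   # path or URL to the motion-source video

    def validate(self) -> None:
        # Both inputs are required; the node cannot run with either missing.
        if not self.character_image:
            raise ValueError("Character Image input is required")
        if not self.reference_video:
            raise ValueError("Reference Video input is required")
```

This mirrors the rule stated above: a run needs exactly one character image and one reference video before anything else is configured.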

How to Use

1. Add the Node

Click the Add (+) button and select Motion Transfer from the Video node category.
2. Connect a Character Image

Link a character image via the Character Image input handle (marked in orange). This is the subject that will be animated with the transferred motion. Use a clear, well-lit image where the character’s full body or relevant body parts are visible.
3. Connect a Reference Video

Link a video via the Reference Video input handle (marked in green). This is the motion source—the AI will extract the movement from this video and map it onto your character.
4. Configure Settings

Select your Model, adjust Guidance Scale, Inference Steps, and other parameters from the Properties panel (see settings table below).
5. Generate

Click Run, and the AI will produce a video of your character performing the motion from the reference video.
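The five steps above amount to assembling two inputs and a handful of settings into one run. As a rough sketch, the payload such a run might carry could look like the following; the function and field names are hypothetical, since the real node is configured entirely through the UI:

```python
def build_run_request(character_image: str, reference_video: str,
                      model: str = "Wan 2.2 Move",
                      guidance_scale: float = 1.0,
                      inference_steps: int = 20) -> dict:
    """Assemble an illustrative Motion Transfer run payload.

    Field names are assumptions for illustration only; the actual
    product exposes these as node handles and Properties-panel settings.
    """
    if not character_image or not reference_video:
        raise ValueError("Both a character image and a reference video are required")
    return {
        "node": "motion_transfer",
        "inputs": {
            "character_image": character_image,   # orange input handle
            "reference_video": reference_video,   # green input handle
        },
        "settings": {
            "model": model,
            "guidance_scale": guidance_scale,
            "inference_steps": inference_steps,
        },
    }
```

The defaults shown (guidance scale 1, 20 inference steps) match the defaults listed in the settings table below.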

Choosing the Right Settings

| Setting | Type | Impact on Output |
| --- | --- | --- |
| Model | Dropdown (e.g., Wan 2.2 Move) | Selects the AI model for motion transfer. Different models handle body types, motion complexity, and rendering quality differently. |
| Guidance Scale | Slider (default: 1) | Controls how closely the output follows the reference motion. Lower values allow more creative freedom; higher values produce a stricter match to the source motion. |
| Resolution | Dropdown (e.g., 480p, 720p) | Determines the output video resolution. Higher resolutions capture finer detail but take longer to generate. |
| Inference Steps | Slider (default: 20) | Controls how many processing passes the AI runs. More steps generally produce smoother, higher-quality results but increase generation time. |
| Video Quality | Dropdown (High, Medium, Low) | Sets the overall rendering quality of the output video. |
| Seed | Number input | A fixed number that makes results reproducible across generations. |
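The settings above can be thought of as a small configuration object with the listed defaults. The sketch below is an illustration (the names are assumptions, not the product's API); the `noise_preview` helper only demonstrates why a fixed seed makes generations reproducible, namely that seeding a random source with the same number replays the same sequence:

```python
import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class MotionTransferSettings:
    """Hypothetical settings object mirroring the table's defaults."""
    model: str = "Wan 2.2 Move"
    guidance_scale: float = 1.0   # higher = stricter match to reference motion
    resolution: str = "480p"
    inference_steps: int = 20     # more steps = smoother output, slower run
    video_quality: str = "High"
    seed: Optional[int] = None    # fix this for reproducible generations


def noise_preview(settings: MotionTransferSettings, n: int = 3) -> list:
    """Toy demonstration of seed behavior: the same seed always
    produces the same pseudo-random sequence, so two runs with
    identical settings and seed start from identical noise."""
    rng = random.Random(settings.seed)
    return [round(rng.random(), 4) for _ in range(n)]
```

With `seed=None` each run draws fresh randomness (varied outputs); with a fixed seed, repeated runs with the same settings start identically, which is the behavior the Seed setting exposes.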

Sample Use Cases

Viral Dance and Trend Videos

Grab a trending dance video as your reference and transfer the choreography onto an AI-generated character, brand mascot, or illustrated figure, perfect for jumping on social trends without filming yourself.

Animated Character Performances

Bring concept art or illustrated characters to life by transferring real human performances onto them. Record a quick acting reference, and your character inherits every gesture, head tilt, and body movement.

Virtual Try-On with Movement

Combine a fashion image with a walking or posing reference video to showcase clothing in motion, giving e-commerce customers a dynamic view of how garments look and flow on a moving body.

Motion Transfer Models

Visit Video Models to explore all available models and find the one that fits your needs for creating or transforming videos.