The LipSync Node synchronizes audio with video by automatically adjusting lip movements to match speech or music. It analyzes the audio track and generates realistic mouth animations that align perfectly with the sound, making it ideal for dubbing, voice-over work, multilingual content, and creating expressive character animations.

How to Use

  1. Add the Node:
    • Click the Add (+) button and select LipSync from the Video node category.
  2. Connect Video and Audio:
    • Link a video from another node (such as Generate Video, Import, or Extend).
    • Link an audio track from an audio node or import.
  3. Configure Settings:
    • Select your preferred Model and adjust other parameters from the Properties panel.
  4. Generate:
    • Click Run, and the AI will produce a video with lip movements synchronized to the audio.
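The four steps above amount to wiring two inputs into one node and running the graph. A minimal Python sketch of that wiring, where the `Node` class and all node/parameter names are hypothetical stand-ins mirroring the UI, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical stand-in for a graph node; names mirror the UI, not a real API.
    kind: str
    params: dict = field(default_factory=dict)
    inputs: list = field(default_factory=list)

# 1. Add the source nodes (e.g. Generate Video and an audio import).
video = Node("GenerateVideo", {"prompt": "a person speaking to camera"})
audio = Node("ImportAudio", {"path": "dub_fr.wav"})

# 2-3. Connect both into LipSync and configure its properties.
lipsync = Node("LipSync", {"model": "default", "seed": 42}, inputs=[video, audio])

# 4. "Run" would traverse this graph; here we just inspect the wiring.
assert [n.kind for n in lipsync.inputs] == ["GenerateVideo", "ImportAudio"]
```

The point of the sketch is only the shape of the data flow: LipSync consumes exactly one video stream and one audio stream, plus its own properties.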

Choosing the Right Settings

| Setting | Type | Impact on Output |
| --- | --- | --- |
| Model | Dropdown | Selects the AI model used for lip-sync generation. Different models trade off accuracy, speed, and naturalness of mouth movements. |
| Seed | Number Input | A fixed number that makes results reproducible across generations. |
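The Seed setting works the way seeds do in any pseudo-random generator: the same seed replays the same internal random choices, so two runs with identical inputs and seed produce the same output. A toy illustration using Python's standard library (the function name and values are invented for the example, not part of the product):

```python
import random

def generate_mouth_offsets(seed, n=5):
    # Toy stand-in for a generation step: seeding the RNG fixes every
    # subsequent "random" choice, which is what makes a run reproducible.
    rng = random.Random(seed)
    return [round(rng.uniform(-1, 1), 3) for _ in range(n)]

run_a = generate_mouth_offsets(seed=42)
run_b = generate_mouth_offsets(seed=42)
run_c = generate_mouth_offsets(seed=7)

assert run_a == run_b  # same seed -> identical result
assert run_a != run_c  # different seed -> different result
```

In practice: keep the seed fixed while tuning other settings to compare them fairly, and vary the seed to get alternative takes from the same inputs.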

Sample Use Cases

  • Dubbing and multilingual content: Generate or import a video, then replace the original audio with a dubbed version in another language. Use LipSync to automatically adjust the lip movements to match the new dialogue.
  • Voice-over work: Record or generate a voice-over track, then sync it to existing video footage. LipSync ensures the speaker’s mouth movements align with the new audio for a polished, professional result.
  • Character animation: Pair generated character videos with custom audio to create expressive animations. LipSync automatically generates realistic mouth movements that match the speech, dialogue, or singing.

LipSync Models

Visit Video Models to explore all available models and find the one that fits your needs for creating or transforming videos.