
VideoLingo: Connecting the World, Frame by Frame

🌟 Overview (Try VideoLingo Now!)

VideoLingo is an all-in-one video translation, localization, and dubbing tool aimed at generating Netflix-quality subtitles. It eliminates stiff machine translations and multi-line subtitles while adding high-quality dubbing, enabling global knowledge sharing across language barriers.

Key features:

  • 🎥 YouTube video download via yt-dlp

  • 🎙️ Word-level subtitle recognition with WhisperX

  • 📝 NLP and GPT-based subtitle segmentation

  • 📚 GPT-generated terminology for coherent translation

  • 🔄 3-step direct translation, reflection, and adaptation for professional-level quality (see the sketch at the end of this overview)

  • ✅ Netflix-standard single-line subtitles only

  • 🗣️ Dubbing alignment with GPT-SoVITS and other methods

  • 🚀 One-click startup and output in Streamlit

  • 📝 Detailed logging with progress resumption

Difference from similar projects: single-line subtitles only, superior translation quality, and a seamless dubbing experience.
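
To make the 3-step flow concrete, below is a minimal sketch of a translate-reflect-adapt chain against an OpenAI-compatible endpoint. The prompts, the ask() helper, and the model name are illustrative assumptions, not VideoLingo's actual implementation.

# Illustrative translate -> reflect -> adapt chain (hypothetical prompts;
# not VideoLingo's actual pipeline). Assumes the openai Python SDK and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # any OpenAI-compatible chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def three_step_translate(line: str, target_lang: str) -> str:
    # Step 1: direct translation.
    draft = ask(f"Translate this subtitle into {target_lang}:\n{line}")
    # Step 2: reflection - critique fluency, accuracy, and length.
    critique = ask(f"List problems in this {target_lang} translation of "
                   f"'{line}' (fluency, accuracy, subtitle length):\n{draft}")
    # Step 3: adaptation - rewrite using the critique.
    return ask(f"Rewrite the translation of '{line}' into natural, concise "
               f"{target_lang}, fixing these issues:\n{critique}\n"
               f"Draft:\n{draft}")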

🎥 Demo

Russian Translation


https://github.com/user-attachments/assets/25264b5b-6931-4d39-948c-5a1e4ce42fa7

GPT-SoVITS Dubbing


https://github.com/user-attachments/assets/47d965b2-b4ab-4a0b-9d08-b49a7bf3508c

Language Support

Input Language Support (more to come):

🇺🇸 English 🤩 | 🇷🇺 Russian 😊 | 🇫🇷 French 🤩 | 🇩🇪 German 🤩 | 🇮🇹 Italian 🤩 | 🇪🇸 Spanish 🤩 | 🇯🇵 Japanese 😍 | 🇨🇳 Chinese* 😊

*Chinese uses a separate punctuation-enhanced Whisper model, for now...

Translation supports all languages, while dubbing language depends on the chosen TTS method.

Installation

Note: To use NVIDIA GPU acceleration on Windows, please complete the following steps first:

  1. Install CUDA Toolkit 12.6
  2. Install CUDNN 9.3.0
  3. Add C:\Program Files\NVIDIA\CUDNN\v9.3\bin\12.6 to your system PATH
  4. Restart your computer

Note: For Windows and macOS users, it's recommended to install FFmpeg via package managers (Chocolatey/Homebrew): choco install ffmpeg (Windows) or brew install ffmpeg (macOS). If not installed, the program will download FFmpeg locally.

  1. Clone the repository
git clone https://github.com/Huanshere/VideoLingo.git
cd VideoLingo
  2. Install dependencies (requires python=3.10)
conda create -n videolingo python=3.10.0 -y
conda activate videolingo
python install.py
  3. Start the application
streamlit run st.py

Docker

Alternatively, you can use Docker (requires CUDA 12.4 and NVIDIA Driver version >550), see Docker docs:

docker build -t videolingo .
docker run -d -p 8501:8501 --gpus all videolingo

API

The project supports the OpenAI-like API format and various dubbing interfaces (a minimal client sketch follows the lists):

  • claude-3-5-sonnet-20240620, gemini-1.5-pro-002, gpt-4o, qwen2.5-72b-instruct, deepseek-coder, ... (sorted by performance)
  • azure-tts, openai-tts, siliconflow-fishtts, fish-tts, GPT-SoVITS
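
Because the format is OpenAI-compatible, any of the LLMs above can in principle be reached by pointing a standard client at the provider's endpoint. Below is a minimal sketch; the base URL, key, and model are placeholders for your own provider settings, not VideoLingo configuration keys.

# Minimal sketch of calling an OpenAI-compatible endpoint. The base URL,
# key, and model below are placeholders for your own provider settings.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.your-provider.com/v1",  # placeholder endpoint
    api_key="sk-...",                             # your provider key
)

resp = client.chat.completions.create(
    model="claude-3-5-sonnet-20240620",  # any model from the list above
    messages=[{"role": "user", "content": "Say hello in French."}],
)
print(resp.choices[0].message.content)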

For detailed installation, API configuration, and batch mode instructions, please refer to the documentation: English | 中文

Current Limitations

  1. WhisperX transcription performance may be affected by video background noise, as it uses the wav2vec model for alignment. For videos with loud background music, please enable Voice Separation Enhancement. Additionally, subtitles ending with numbers or special characters may be truncated early because wav2vec cannot map numeric characters (e.g., "1") to their spoken form ("one").

  2. Using weaker models can lead to errors during intermediate processes due to the strict JSON format required of responses. If this error occurs, please delete the output folder and retry with a different LLM; otherwise, repeated execution will re-read the previous erroneous response and trigger the same error (see the first sketch after this list).

  3. The dubbing feature may not be 100% perfect due to differences in speech rates and intonation between languages, as well as the impact of the translation step. However, this project has implemented extensive engineering around speech rates to ensure the best possible dubbing results (the second sketch after this list illustrates the basic idea).

  4. Multilingual video transcription will only retain the main language. This is because WhisperX uses a specialized single-language model when forcibly aligning word-level subtitles, and it deletes unrecognized languages.

  5. Multiple characters cannot be dubbed separately, as WhisperX's speaker distinction capability is not sufficiently reliable.
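
Regarding limitation 2: strict JSON requirements are commonly softened with tolerant parsing, and the project credits json_repair (see the License section). Below is a minimal sketch of that approach; the malformed response is fabricated for illustration.

# Tolerant parsing of an LLM reply that is almost, but not quite, JSON.
# json_repair fixes common defects such as single quotes and trailing commas.
import json
from json_repair import repair_json

llm_output = "{'lines': ['first subtitle', 'second subtitle'],}"  # malformed
fixed = repair_json(llm_output)  # returns a repaired JSON string
data = json.loads(fixed)
print(data["lines"])             # ['first subtitle', 'second subtitle']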
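
Regarding limitation 3: one simple form of speech-rate handling is time-stretching a dubbed clip to fit the original segment. The sketch below uses FFmpeg's atempo filter as an illustration only; it is not VideoLingo's actual algorithm.

# Simplified illustration of speech-rate matching: time-stretch a dubbed
# clip so it fits the original subtitle slot (not the project's real logic).
import subprocess

def fit_duration(dub_path: str, out_path: str,
                 dub_secs: float, target_secs: float) -> None:
    tempo = dub_secs / target_secs       # >1 speeds up, <1 slows down
    tempo = max(0.5, min(2.0, tempo))    # clamp to a safe atempo range
    subprocess.run(
        ["ffmpeg", "-y", "-i", dub_path,
         "-filter:a", f"atempo={tempo}", out_path],
        check=True,
    )

# Example: a 6.0 s dub must fit a 5.0 s slot, so play it at 1.2x speed.
# fit_duration("dub.wav", "dub_fit.wav", dub_secs=6.0, target_secs=5.0)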

📄 License

This project is licensed under the Apache 2.0 License. Special thanks to the following open source projects for their contributions:

whisperX, yt-dlp, json_repair, BELLE

📬 Contact Us

โญ Star History

Star History Chart


If you find VideoLingo helpful, please give us a ⭐️!


2024 ยฉ VideoLingo.