How to Run Wan 2.2 in ComfyUI: Full AI Video Workflow (Works on 8GB VRAM)
A complete tutorial showing how to run Wan 2.2 in ComfyUI using just 8GB of VRAM. Learn how to install the models, set up the workflow, and generate your first AI video from a single image.
The release of Wan 2.2, a powerful open-source image-to-video AI model, has made cinematic video generation more accessible than ever. While many think AI video tools demand high-end GPUs, this guide shows you how to install and run Wan 2.2 using just 8GB of VRAM—all inside ComfyUI.
This article breaks down the process into easy-to-follow steps, from downloading model files to testing your first image-to-video workflow. Whether you're an AI creator, developer, or just exploring generative tools, this setup gives you full creative power without the need for expensive hardware. You’ll not only learn how to get everything running but also how to fix common errors and optimize the output for your system.
Downloading the Wan 2.2 Model and Dependencies
To get started, you’ll need the following components:
Wan 2.2 GGUF model files (5B or 14B)
Lightx2v LoRA for video motion guidance
AutoVAE KL-f8 (VAE file)
UMT5 Text Encoder for prompt processing
All of these are available through Hugging Face and GitHub:
Wan 2.2 GGUF models: https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF
Lightx2v LoRA: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v
Text Encoders & VAE: https://github.com/Comfy-Org/Wan-2.2-ComfyUI
Make sure to select the GGUF file that matches your system: the 5B model for lower-VRAM cards (such as 8GB) or the 14B model for high-end GPUs. Lower-bit GGUF quantizations of the same model also cut VRAM requirements.
These files are crucial to achieving smooth motion and coherent visual output. The LoRA specifically adds animation behavior, while the UMT5 encoder processes the prompt in a way the model can understand. Without these, your animation may either fail or look static.
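If you prefer to script the downloads instead of grabbing each file through the browser, the sketch below uses the huggingface_hub Python package. The GGUF and LoRA filenames are placeholders; browse each repository and substitute the files that match your VRAM.

```python
# Download sketch using the huggingface_hub package (pip install huggingface_hub).
# The GGUF and LoRA filenames below are placeholders: browse each repository on
# Hugging Face and substitute the exact file that matches your VRAM budget.
from huggingface_hub import hf_hub_download

unet_path = hf_hub_download(
    repo_id="bullerwins/Wan2.2-I2V-A14B-GGUF",
    filename="wan2.2_i2v_high_noise_14B_Q4_K_M.gguf",  # placeholder filename
    local_dir="ComfyUI/models/unet",
)

lora_path = hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Lightx2v/lightx2v_lora_placeholder.safetensors",  # placeholder filename
    local_dir="ComfyUI/models/loras",
)

print(unet_path)
print(lora_path)
```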
Organizing the File Structure in ComfyUI
After downloading the required files, you'll need to place them into the correct ComfyUI directories:
GGUF models go into: ComfyUI/models/unet
LoRA files into: ComfyUI/models/loras
VAE files into: ComfyUI/models/vae
Text encoders into: ComfyUI/models/text_encoders
Double-check the filenames and extensions. ComfyUI's model loaders will not list files that sit in the wrong folder or use the wrong extension, so make sure you're using unzipped, compatible files from the correct repositories.
This step may seem straightforward, but it's crucial for a clean start: proper organization avoids issues like models not appearing in dropdown menus or the workflow failing to start.
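Before launching ComfyUI, you can sanity-check the layout with a short script like the one below. It simply lists what each expected model folder contains; the paths assume you run it from the directory that holds the ComfyUI folder.

```python
# Quick sanity check: list the model files ComfyUI should be able to see.
# Run this from the directory that contains the ComfyUI/ folder.
from pathlib import Path

expected = {
    "ComfyUI/models/unet": (".gguf",),
    "ComfyUI/models/loras": (".safetensors",),
    "ComfyUI/models/vae": (".safetensors", ".pt"),
    "ComfyUI/models/text_encoders": (".safetensors",),
}

for folder, extensions in expected.items():
    path = Path(folder)
    files = [f.name for f in path.glob("*") if f.suffix in extensions] if path.is_dir() else []
    status = ", ".join(files) if files else "MISSING OR EMPTY"
    print(f"{folder}: {status}")
```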
Loading the Hugging Face Workflow
To streamline the process, use an official Hugging Face workflow for Wan 2.2:
Workflow download: https://huggingface.co/datasets/theaidealab/workflows/tree/main
Import the workflow into ComfyUI by dragging the .json file into the node editor. This will auto-populate the model loader, LoRA, image input, and video output nodes.
Customize the nodes as needed:
Set the LoRA strength between 0.8 and 1.0
Adjust frame count and motion settings for your output
Select appropriate resolution settings for your hardware limits
The Hugging Face workflow simplifies what would otherwise be a complex manual setup of nodes, connections, and parameter tuning. Using it ensures consistency across results and saves hours of guesswork.
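If you're curious what the workflow contains before importing it, a quick script like this prints each node and its widget values. It assumes the standard drag-and-drop ComfyUI workflow JSON layout with a top-level "nodes" list, and the filename is a placeholder for whichever workflow you downloaded.

```python
# Peek inside the downloaded workflow before importing it into ComfyUI.
# Assumes the usual drag-and-drop ("UI") workflow JSON layout with a top-level
# "nodes" list; adjust if the file you downloaded is structured differently.
import json

with open("wan2.2_i2v_workflow.json") as f:  # placeholder filename
    workflow = json.load(f)

for node in workflow.get("nodes", []):
    print(node.get("type"), node.get("widgets_values", []))
```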
Running Your First Image-to-Video Test
For this guide, a wolf image was used with a "stormy atmosphere" prompt. Here's a quick breakdown of the process:
Load the wolf image into the input node
Type your text prompt (e.g., "howling wolf in a thunderstorm")
Set frame duration and output resolution
Hit the execute button
The output? A short, AI-generated cinematic animation with dynamic weather motion—rendered directly from a still image.
Results will vary depending on prompt phrasing, LoRA weight, and model settings, but the flexibility offered by ComfyUI allows you to iterate quickly. You can reuse the same base image with different prompts or even tweak frame durations to create different pacing effects.
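For repeated experiments, you can also queue generations from a script through ComfyUI's local HTTP API instead of clicking execute each time. The sketch below assumes ComfyUI is running on the default port (8188) and that you exported the workflow in API format from ComfyUI; the filename is a placeholder.

```python
# Queue a generation through ComfyUI's local HTTP API instead of clicking
# "Queue Prompt" in the browser. Assumes ComfyUI is running on the default
# port (8188) and that the workflow was exported in API format from ComfyUI.
import json
import urllib.request

with open("wan2.2_i2v_workflow_api.json") as f:  # placeholder filename
    prompt_graph = json.load(f)

payload = json.dumps({"prompt": prompt_graph}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the server replies with a prompt_id
```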
Fixing the SageAttention Error with Gemini CLI
If you encounter this error:
ModuleNotFoundError: No module named 'sageattention'
Don't worry: this is a known missing dependency. Instead of troubleshooting manually, you can use Gemini CLI:
Open Gemini CLI
Paste the full Python error log
Ask: "Fix this error and install the right module"
Gemini will generate a pip command or even install directly from GitHub. In our case, it automatically installed sageattention from its repository and resolved the issue.
This AI-assisted troubleshooting saves time and eliminates the guesswork involved in resolving obscure Python module errors. It’s especially useful if you're not comfortable manually digging through package dependencies.
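If you'd rather apply the fix by hand, the snippet below checks whether the module is importable in the same Python environment that runs ComfyUI and prints the usual install commands. The GitHub URL shown is assumed to be the upstream SageAttention repository, so verify it before installing.

```python
# Manual alternative to the Gemini CLI fix: check whether sageattention is
# importable in the Python environment that runs ComfyUI, and print the usual
# install commands if it is not. The GitHub URL is assumed to be the upstream
# SageAttention repository; verify it before installing.
try:
    import sageattention  # noqa: F401
    print("sageattention is already installed")
except ModuleNotFoundError:
    print("sageattention is missing; try one of:")
    print("  pip install sageattention")
    print("  pip install git+https://github.com/thu-ml/SageAttention")
```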
Performance on 8GB VRAM and Optimization Tips
Yes, it works—and pretty well. Here’s what to keep in mind when working with limited hardware:
Lower output resolution to 512x512
Reduce frame count if you experience memory bottlenecks
Close background applications before generating video
Monitor VRAM usage during processing
These optimizations allow you to maintain performance while producing quality results. The workflow remains responsive and the output visually impressive despite lower hardware specs. Even with limited resources, you can create something that looks like it came from a much more powerful system.
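A simple way to watch VRAM between runs is a small PyTorch snippet like the one below; it reports memory used by the current Python process, while nvidia-smi gives the system-wide picture.

```python
# Simple VRAM snapshot helper using PyTorch (already installed with ComfyUI).
# Call it before and after a generation, or between runs, to see headroom.
import torch

def print_vram_usage(label: str = "") -> None:
    if not torch.cuda.is_available():
        print("No CUDA device detected")
        return
    allocated = torch.cuda.memory_allocated() / 1024**3
    reserved = torch.cuda.memory_reserved() / 1024**3
    total = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"{label} allocated {allocated:.2f} GiB | reserved {reserved:.2f} GiB | total {total:.2f} GiB")

print_vram_usage("after load:")
```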
Watch the Full Video Walkthrough
For those who prefer visual learning, the entire setup and workflow execution is available in this video:
Wan 2.2 Full Installation Guide for ComfyUI Step-By-Step | 8GB Vram
🎬 YouTube Tutorial: https://youtu.be/7hUO6KhUsvQ
You'll see exactly where to place each file, how to fix the error using Gemini CLI, and what the final AI-generated video looks like. The video also includes a detailed explanation of how each node works within the ComfyUI environment.
Final Thoughts
Wan 2.2 opens up image-to-video AI generation to a much wider audience—and pairing it with ComfyUI makes the process even more visual and accessible. This guide proves you don’t need a top-tier GPU to explore cutting-edge generative tools.
By carefully organizing files, using official workflows, and leveraging tools like Gemini CLI for quick fixes, anyone can achieve impressive AI animation output. If you're interested in generative video, open-source models, or just want to create something cinematic from a still image, this workflow is a fantastic place to start.
As AI models continue to evolve, being able to run them efficiently on modest hardware gives more creators a chance to participate in the future of digital storytelling. So, whether you're testing a new workflow or launching your next project, Wan 2.2 with ComfyUI is more than capable of helping you bring your vision to life.


