The Video Composer is the page and process that guides you through creating a full video. Once you have filled out the settings on this page, an AI agent that we refer to as the Video Composer Agent generates all the required media and puts it together for you. Right now we support videos up to 3 minutes long.
The Editor is our video editing interface where you can directly generate video clips, voiceovers, and sound effects. You can do everything the Video Composer does but with more precise control. It's perfect for fine-tuning prompts and adjusting videos created by the Video Composer.
The Editor Agent is our AI assistant that helps you in the Editor. It can help write prompts and scripts and automate video creation. While helpful, it's not as comprehensive as the Video Composer for generating complete videos. The Editor Agent is fantastic at generating variations and iterations of visuals, sound effects, and voiceovers. When the Editor Agent generates visuals, it uses the current orientation of the project.
AI models are trained by various third-party companies to generate media for us. We use third-party models for video clips, voiceovers, and sound effects. You can pick and choose which model you want to try out for each media type in the Editor. Many AI artists grow to love a specific model for their desired use cases.
The Video Composer is a great place for quickly generating full videos. This is especially powerful when your video needs voiceovers, music, sound effects, and visuals. The Video Composer does it all! The Editor is better for quickly iterating on specific visuals and prompts. It is extremely useful for users who like to generate 10 versions of the same visual or sound effect and then pick their favorite. An extremely powerful workflow is to generate a video with the Video Composer and then tune it to perfection in the Editor.
The answer, of course, is: it depends. It's like asking how much it costs to make a movie. Are you shooting a blockbuster with Scarlett Johansson or a low-budget short film on an iPhone? With AIVideo.com, the same logic applies. You can use premium AI video models like Veo 2, which costs 300 credits per clip, or faster, cheaper AI image models that cost just 1 credit. So in theory, a minute of video made in the Editor could cost as little as 1 credit or as much as thousands of credits, depending on the model you use and how many clips you generate to get the perfect output. That said, our end-to-end video system typically runs anywhere from 60 to 180 credits per minute, depending on the media types you allow it to use.
A request is a simple text prompt that you give to the Video Composer. For example: 'Generate me a video about the history of sailboats'. A script is a more detailed outline of the video you want to create, where you can specify exact speech, visuals, and sounds.
The Video Composer Agent is capable of handling scripts in many formats, but if you are struggling to get the results you want, structuring your script in the following format may help.
- State any major directions you want for the video at the top as a video overview. For example: 'Video Overview: Ensure that the video uses cinematic aerial shots of the sailboats'
- Put text that you want spoken verbatim in quotes and specify who is speaking. For example: Narrator: 'Welcome to the history of sailboats'
- Put specific visual descriptions in brackets. For example: [A sailboat sailing on the ocean]
- Put specific sound effects in braces. For example: {Wind blowing through the sails}
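Putting those pieces together, a short script in this format might look something like the sketch below. The sailboat lines are just an illustration of the structure, not content the agent requires.
Video Overview: Ensure that the video uses cinematic aerial shots of the sailboats
Narrator: 'Welcome to the history of sailboats'
[A sailboat sailing on the ocean at sunrise]
{Wind blowing through the sails}
Narrator: 'For thousands of years, sailors have harnessed the wind to cross the seas'
[An aerial shot following a fleet of sailboats across open water]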
- For videos where you aren't sure what the visuals should be, try providing just the speech verbatim and letting the agent come up with the visuals. Many scripts that people generate with chat agents such as ChatGPT have no concept of script timing and often include directions that will only harm the flow of your video.
- Learn to prompt effectively for AI generations. Prompting well is a skill that takes time to develop, and understanding what AI video generation is and isn't good at is key to writing effective prompts. The Editor is a great place to practice and develop your prompting skills before attempting a full video with the Video Composer.
- Prompting camera motion: If your videos feel flat, try prompting for more camera motion. Learning camera control terms such as "Panning", "Zooming", "Tilting", and "Rolling" can help your videos feel more dynamic.
The agent strategy determines how we use different AI models to create your video.
- Consistent Style & Characters: This strategy uses Photon Luma to storyboard your entire video first, ensuring consistent style and characters. These images are then converted to video using Kling 1.6 Image to Video. It can automatically generate characters or use ones you provide. While offering excellent style consistency, it may occasionally produce unrealistic visuals.
- Realistic & Dynamic: This strategy uses Kling 1.6 Text to Video and Recraft V3 directly, producing more realistic results with better camera motion. While style consistency may vary, you can still request specific consistent elements through the Video Composer settings.