
Luma Labs, renowned for its innovations in AI video generation, has unveiled a landmark addition to its "Dream Machine" platform.
The new tool, named "Modify Video," promises to transform how people work with their footage, whether professionally shot clips or old home videos.
This new technology allows users to reimagine and alter scenes without the need for costly reshoots or complex animation processes.
The company stated that the tool preserves the core elements of the original material—such as actor movements, camera framing, and timing—while giving users the freedom to change any other aspect they desire.
From Subtle Tweaks to a Complete Overhaul
"Modify Video" offers extensive capabilities, ranging from subtle adjustments like changing a character's clothing or giving furniture a different texture, to complete transformations of the entire scene.
According to Luma Labs, the tool's power lies in its ability to understand and analyze the fundamental elements of any video clip.
It extracts full-body motion data, facial expressions, and even lip-syncing, then uses this data to guide new characters or elements within the same scene.
Introducing Modify Video. Reimagine any video. Shoot it in post with director-grade control over style, character, and setting. Restyle expressive performances, swap entire worlds, or redesign the frame to your vision. Shoot once. Shape infinitely.
— Luma AI (@LumaLabsAI) June 4, 2025
For example, you can change a person's outfit in a video or simply alter their shirt color to ensure visual continuity between scenes.
You could even transform a friend rolling on the floor into an astronaut floating in outer space.

It goes even further, allowing for modifications to specific background elements, like adding a strange object flying in the sky or changing the room's furniture, without affecting the rest of the shot.
Luma AI noted that these capabilities are built on a combination of advanced motion and performance capture, AI-driven design, and what the company calls "structured presets."
To facilitate this process, the company introduced three levels of modification strength: "Adhere," "Flex," and "Reimagine."
The "Adhere" setting focuses on fine-tuned changes, such as adjusting clothing or altering the material of surfaces.
"Flex" allows for more significant modifications, including changes to the video's style, lighting, and other details.
In contrast, "Reimagine" grants complete freedom to reshape everything in the video, such as transporting the scene to another world, turning people into cartoon animals, or dropping a skateboarder into a race of futuristic hoverboards.
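For readers who plan to script around the tool, the three presets map naturally onto an enumeration. The sketch below is purely illustrative: the class, the value strings, and the helper are assumptions, not part of any published Luma interface.

```python
from enum import Enum

class ModifyMode(str, Enum):
    """Hypothetical mapping of Dream Machine's three modification strengths.

    The names mirror the UI labels; the string values and this class are
    illustrative only, not a published Luma API.
    """
    ADHERE = "adhere"        # fine-tuned edits: clothing, surface materials
    FLEX = "flex"            # broader changes: style, lighting, details
    REIMAGINE = "reimagine"  # full transformation of the entire scene

def pick_mode(scope: str) -> ModifyMode:
    """Choose a preset from a rough description of how sweeping the edit is."""
    return {
        "detail": ModifyMode.ADHERE,
        "style": ModifyMode.FLEX,
        "scene": ModifyMode.REIMAGINE,
    }[scope]
```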
Reference Images, Frame Selection, and Competitor Comparisons
The process isn't limited to text prompts. Users can also leverage reference images and select specific frames from the video to guide the modification, making the process more intuitive and flexible.
Luma AI asserts that its technology outperforms competing tools, such as those from Runway and Pika, particularly in maintaining performance accuracy.
The tool preserves the actor's body language, facial expressions, and lip-syncing, making the final results look cohesive and organic.
"Modify Video" is designed to provide high-fidelity creative control over the entire timeline of a shot, using advanced performance cues like pose, facial expression, and scene structure to understand what to preserve and what to reimagine.
It also supports multiple resolutions, including 720p and 1080p, so the output can slot into a range of projects.
How to Modify Videos with AI
To use "Modify Video," users need to access Dream Machine through a paid subscription.
Access Link: https://dream-machine.lumalabs.ai
The process begins by uploading a video clip (currently limited to 10 seconds of input, with the ability to generate up to 30 seconds of output), selecting the desired transformation level (Adhere, Flex, or Reimagine), and writing a text prompt describing the changes.
It's worth noting that modified clips retain their original audio, with the option to add a new audio track.
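To make those steps concrete, here is a minimal sketch of how the workflow might look if scripted. The endpoint, field names, and helper function are hypothetical assumptions for illustration only; Luma currently exposes Modify Video through the Dream Machine web interface described above.

```python
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical endpoint, not Luma's real API
API_KEY = "YOUR_API_KEY"

def modify_video(path: str, mode: str, prompt: str, keep_audio: bool = True) -> dict:
    """Sketch of the Modify Video workflow: upload a clip (max 10 s of input),
    pick a strength preset, and describe the change in a positive prompt.

    All field names here are assumptions made for illustration.
    """
    with open(path, "rb") as clip:
        resp = requests.post(
            f"{API_BASE}/modify-video",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"video": clip},
            data={
                "mode": mode,                               # "adhere" | "flex" | "reimagine"
                "prompt": prompt,                           # e.g. "a woman wearing a blue shirt"
                "keep_original_audio": str(keep_audio).lower(),
            },
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()

# Example: a fine-grained wardrobe change using the Adhere preset.
# result = modify_video("clip.mp4", "adhere", "a woman wearing a blue shirt")
```

Note how the prompt follows the positive-phrasing advice covered below: it states the desired outcome rather than what to avoid.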
The company also mentioned that the tool will be rolled out to the Luma Labs iOS app very soon.
For best results, Luma Labs emphasized the importance of giving positive and specific instructions.
Instead of describing what you don't want to see, such as "a woman wearing a shirt that isn't red," it's recommended to describe the desired vision directly, like "a woman wearing a blue shirt."
This direct approach helps the AI understand the goal clearly and execute it accurately, avoiding the ambiguity that can arise from negative commands.
Luma AI also advises testing ideas on short clips first. Stable shots and clean backgrounds likewise improve subject isolation and help preserve the accuracy of the element being modified.
Current Limitations
On the other hand, the company has acknowledged some current limitations.
The most notable is that the tool accepts input clips of at most 10 seconds, which may require splitting longer scenes into parts.
Furthermore, the quality of the original video directly impacts the transformation quality, and highly shaky or low-resolution footage may limit the results.
Nevertheless, these features represent a significant step forward. Observers believe that while such tools may not replace large-scale film productions for now, they will be incredibly appealing for independent producers and amateur content creators.