Luma’s a16z-backed Dream Machine platform is getting Ray3 Modify, a model built for studios that need to enhance footage without reshooting. Editors feed the system either a continuous take or just the first and last frames of a shot. Ray3 Modify then generates the in-between sequence while locking to the original actor’s motion, eyeline, and timing.
Creators can also upload a character reference—costume, creature build, or branded mascot—and map it onto the human performer. Luma says the model keeps prop, wardrobe, and lighting continuity intact, so teams can reimagine a scene in new environments or swap entire casts without rebuilding sets.
The release answers a persistent complaint that generative video lacks steering. CEO Amit Jain positioned Ray3 Modify as a “blend of real footage and AI expressivity” that lets cinematographers keep creative control: capture the performance on stage, then adjust location, wardrobe, or even weather conditions in Dream Machine post-production.
Ray3 Modify ships inside Dream Machine today with priority access for paying studios. The launch follows Luma’s $900 million raise led by Saudi Arabia’s Humain, which is funding a 2GW AI cluster the partners plan to stand up in the kingdom.
Competitors like Runway and Kling have teased similar in-context editing tools, but none yet pair start/end-frame generation with character references in a single workflow. Luma is betting that tighter control will win over film and brand clients who want AI speed without surrendering performance fidelity.