Hey there! If you've been following Adobe's journey into generative AI, then you’re in for some exciting news. Adobe just took a huge step forward with its Firefly AI video model, and it's making its debut right inside Premiere Pro. So, if you’re into video editing, this is definitely something you’ll want to know about.
Let’s break down what this means for creatives like you and me.
The first tool that Adobe’s rolling out is called Generative Extend. Think of it as a nifty way to fix those annoying little problems with your footage, like when it’s just a bit too short. Maybe your scene ends a second or two too soon, or there’s a slight mistake in the shot—like someone’s eyes not quite looking in the right direction. Instead of reshooting, you can now extend the footage by up to two seconds, and voila—problem solved.
But here’s the catch: that two-second cap means it’s only good for smoothing out small issues, not for big fixes. On the plus side, it works in both 720p and 1080p at 24 FPS, which is solid for most projects.
The same tool can even extend some audio elements, but with a few limitations. For instance, you can extend sound effects or the “room tone” by up to ten seconds. However, don’t expect it to stretch spoken dialogue or music—that’s still a no-go for now.
Now, Adobe isn’t stopping there. They’ve also launched two cool AI-powered video generation tools for the web: Text-to-Video and Image-to-Video. These tools were first announced back in September and are now rolling out as a limited public beta on the Firefly web app.
Let me explain how they work: With Text-to-Video, you type a description of the scene you want, and Firefly generates a short clip to match. Image-to-Video goes a step further, letting you add a reference image alongside your prompt so the generated clip follows the look of your still.
One thing to keep in mind is that both tools are still in their early stages. The max length for generated clips is just five seconds, and the quality tops out at 720p at 24 FPS. So, don’t expect to create a full movie with this—yet!
If you’re wondering how Adobe compares to others in the AI video space, here’s the scoop: OpenAI’s Sora and Meta’s Movie Gen are similar efforts, but neither is publicly available just yet. OpenAI claims that Sora can generate videos up to a minute long while maintaining quality, but Adobe’s advantage is that its tools are here, now, and ready for you to use, at least in beta form.
Also, there’s something Adobe is proud of: their AI tools are “commercially safe.” This means they’ve been trained on content Adobe has permission to use, like Adobe Stock, rather than scraped from the internet without permission (as is the case with some other AI models).
One thing I love is that Adobe isn’t done yet. They’re working on speeding things up, promising that generation times will soon get faster. Right now, it takes about 90 seconds to generate a clip, but Adobe’s “turbo mode” should cut that down.
Another cool feature is that anything you create with Adobe’s AI video model comes with Content Credentials, meaning when you publish online, people can see how the content was created and edited. This transparency can be super important when dealing with AI-generated content, especially for commercial use.
The new Firefly AI video tools are rolling out as a limited public beta, so if you’re eager to give them a go, you can sign up for the waitlist. These tools are designed for video editors and creatives who want to experiment with AI while keeping things commercially safe.
And that’s pretty much the gist of it! Adobe is continuing to push the boundaries of what AI can do in the creative space, and if you’re a video editor or content creator, these tools might just become your new best friend.