
Adobe is also working on generative video

Adobe says it is building an artificial intelligence model to generate video. But it won't reveal exactly when the model will be released, or much else about it, beyond the "fact" that it exists.

Offered as a sort of answer to OpenAI's Sora, Google's Imagen 2, and models from the growing number of startups in the nascent generative AI video space, Adobe's model, part of the company's expanding Firefly family of generative AI products, will make its way into Premiere Pro, Adobe's flagship video editing suite, later this year, Adobe says.

Like many generative AI video tools today, Adobe's model creates footage from scratch (from either a prompt or reference images), and it powers three new features in Premiere Pro: object addition, object removal, and generative extension.

These features are self-explanatory.

Object addition lets users select a segment of a video clip (for example, the top third or bottom-left corner) and enter a prompt to insert objects within that segment. In a briefing, an Adobe spokesperson showed a still of a real-world briefcase filled with diamonds generated by Adobe's model.

Image: Diamonds generated by AI, Adobe.

Object removal strips elements from videos, such as boom mics or coffee cups in the background of a shot.

Object removal with AI. As you can see, the results aren't entirely perfect. Image: Adobe

As for generative extension, it adds a few frames to the beginning or end of a clip (unfortunately, Adobe wouldn't say how many). Generative extension isn't meant to create whole scenes, but rather to add intermediate frames to sync with a soundtrack, or to hold on a shot for an extra beat, for example to add emotional weight.

Image: Adobe

To address the fear of deepfakes that inevitably arises around generative AI tools like these, Adobe says it will bring Content Credentials (metadata for identifying AI-generated media) to Premiere. Content Credentials, a media provenance standard that Adobe backs through its Content Authenticity Initiative, were already in Photoshop and were a component of Adobe's Firefly image models. In Premiere, they will indicate not only which content was generated by AI but also which AI model was used to generate it.
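To illustrate the idea, here is a minimal sketch of the kind of provenance record such a scheme attaches to media: a flag for whether the asset is AI-generated and, if so, which model produced it. The field names and structure are invented for illustration; they are not the actual Content Credentials / C2PA schema.

```python
import json

# Hypothetical provenance record, loosely modeled on the idea behind
# Content Credentials: metadata stating whether an asset was
# AI-generated and which model produced it. The schema here is
# illustrative only, not the real C2PA/Content Credentials format.
def make_provenance_record(asset_id, ai_generated, model_name=None):
    record = {
        "asset_id": asset_id,
        "ai_generated": ai_generated,
        # Only meaningful for AI-generated assets.
        "generator_model": model_name if ai_generated else None,
    }
    return json.dumps(record)

def was_ai_generated(record_json):
    # Read the flag back out of the serialized record.
    return json.loads(record_json)["ai_generated"]

rec = make_provenance_record("clip-001", True, "example-video-model")
print(was_ai_generated(rec))  # True
```

The point of such records is that downstream tools (players, editors, verification sites) can inspect them without needing the generator itself.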

I asked Adobe what data (images, videos and so on) was used to train the model. The company wouldn't say, nor would it say whether (or how) it is compensating contributors to that data set.

Last week, Bloomberg, citing sources familiar with the matter, reported that Adobe is paying photographers and artists on its stock media platform, Adobe Stock, up to $120 to submit short video clips to train its video generation model. Pay is said to range from around $2.62 per minute of video to around $7.25 per minute, depending on the submission, with higher-quality footage commanding correspondingly higher rates.

That would be a departure from Adobe's current arrangement with the Adobe Stock artists and photographers whose work it uses to train its image generation models. The company pays those contributors an annual bonus, not a one-time fee, depending on the volume of content they have in Stock and how it's used, although the bonus is subject to an opaque formula and not guaranteed from year to year.

Bloomberg's reporting, if accurate, shows an approach in stark contrast to that of generative AI video rivals like OpenAI, which has repeatedly been said to have scraped data from publicly available websites, including YouTube videos, to train its models. YouTube CEO Neal Mohan recently said that using YouTube videos to train OpenAI's text-to-video generator would be a violation of the platform's terms of service, highlighting the legal shakiness of the fair use argument made by OpenAI and others.

Companies, including OpenAI, are being sued over allegations that they are violating copyright law by training their AI on copyrighted content without crediting or paying the owners. Adobe appears intent on avoiding that outcome, like some of its generative AI competitors, Shutterstock and Getty Images (which also have agreements to license model training data), and, with its IP indemnity policy, positioning itself as a verifiably “safe” option for enterprise customers.

As for payment, Adobe isn't saying how much it will cost customers to use the upcoming video generation features in Premiere; pricing is presumably still being worked out. But the company did reveal that the payment scheme will follow the generative credit system established with its first Firefly models.

For customers with a paid Adobe Creative Cloud subscription, generative credits renew each month, with allocations ranging from 25 to 1000 per month, depending on the plan. More complex workloads (for example, images generated at higher resolutions or multiple image generations) require more credits, as a general rule.
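The mechanics described above can be sketched as a small accounting model: a monthly allowance that depends on the plan, and jobs that draw down credits at different rates. The plan names, per-job costs, and no-rollover behavior below are assumptions for illustration; Adobe has not published video pricing, and only the 25–1,000 monthly range comes from the article.

```python
# Illustrative sketch of a monthly generative-credit system as described
# above: allocations renew each month (25 to 1,000 depending on plan),
# and heavier workloads consume more credits. Plan names and per-job
# costs are hypothetical.
PLAN_ALLOWANCES = {"free": 25, "single_app": 500, "all_apps": 1000}
JOB_COSTS = {"standard_image": 1, "high_res_image": 4, "video_generation": 20}

class CreditAccount:
    def __init__(self, plan):
        self.plan = plan
        self.balance = PLAN_ALLOWANCES[plan]

    def renew_month(self):
        # Balance resets to the plan allowance each month; unused
        # credits don't roll over (an assumption in this sketch).
        self.balance = PLAN_ALLOWANCES[self.plan]

    def run_job(self, job_type):
        cost = JOB_COSTS[job_type]
        if cost > self.balance:
            raise RuntimeError("insufficient credits")
        self.balance -= cost
        return self.balance

acct = CreditAccount("single_app")
print(acct.run_job("high_res_image"))  # 496
```

Under a scheme like this, the open question for users is simply how many credits a single video generation will burn relative to an image.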

The big question is whether Adobe's AI-powered video features will actually be worth the cost.

Firefly's image generation models have so far been widely panned as underwhelming and flawed compared to Midjourney, OpenAI's DALL-E 3, and other competing tools. The lack of a release timeline for the video model doesn't instill much confidence that it will avoid the same fate. Nor does the fact that Adobe declined to show live demonstrations of object addition, object removal, and generative extension, insisting instead on a prerecorded video.

Perhaps to hedge its bets, Adobe says it's in talks with third-party vendors about integrating their video generation models into Premiere, as well as using them to power tools like generative extension and more.

One of those providers is OpenAI.

Adobe says it is collaborating with OpenAI to find ways to bring Sora into the Premiere workflow. A partnership with OpenAI makes sense given the AI startup's recent overtures to Hollywood; tellingly, OpenAI CTO Mira Murati will be attending the Cannes Film Festival this year. Other early partners include Pika, a startup building AI tools for generating and editing video, and Runway, one of the first vendors to market with a generative video model.

An Adobe spokesperson said the company would be open to working with others in the future.

Now, to be very clear, these integrations are more of a thought experiment than a working product at present. Adobe repeatedly stressed that they are in “early preview” and “research”, rather than something customers can expect to play with any time soon.

And that's the general tone of Adobe's generative video product.

Adobe is clearly trying to signal with these announcements that it is thinking about generative video, if only in a preliminary sense. It would be foolish not to: getting caught flat-footed in the generative AI race means risking the loss of a valuable new potential revenue stream, assuming the economics ultimately work out in Adobe's favor. After all, AI models are expensive to train, run, and distribute.

But, frankly, the concepts it is showing aren't very convincing. With Sora in the wild, and surely more innovations on the way, the company has a lot to prove.
