OpenAI revealed Sora to the world on February 15, 2024, by sharing a handful of remarkable AI-generated videos and a research paper on X.
Sora wasn’t the first artificial intelligence video model, but it was the first to show such high levels of consistency, duration and photorealism.
While the output seems impressive, so far only videos generated by OpenAI staff have been shared on either X or TikTok, although some were made with prompts suggested by fans.
No date has been set for when the model will be made public, nor has OpenAI said what limitations will be placed on its output before it is integrated into a tool like ChatGPT.
What is OpenAI Sora?
Sora is a generative video model, similar to the likes of Runway’s Gen-2, Pika Labs’ Pika 1.0 and Stability AI’s Stable Video Diffusion. It turns text, images or video into AI video content.
It is named after the Japanese word for “sky,” which the company said reflects its “limitless creative potential.” One of the first clips showed two people walking through Tokyo in the snow.
Unlike some of the models that came before it, Sora appears to be much more capable, able to generate clips up to a minute long with consistent characters and motion.
What is the technology behind Sora?
The technology behind Sora is an adapted version of the models built for DALL-E 3, OpenAI’s generative image platform, with additional features for fine-tuned control.
Sora is a diffusion transformer model; that is, it marries the denoising approach behind image generators like Stable Diffusion with the token-based transformer architecture powering ChatGPT.

A video is generated in a compressed latent space as 3D patches, progressively “denoised,” and then passed through a video decoder to turn it into a standard, human-viewable output.
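The patch-based idea described above can be sketched in miniature. The following toy example is purely illustrative: the patch sizes, the token counts and the crude noise-shrinking loop are placeholder assumptions, not OpenAI’s actual implementation, which has not been published in detail.

```python
import random

def patch_grid(frames, height, width, pt, ph, pw):
    """Number of 3D spacetime patches (transformer tokens) for a latent
    video of frames x height x width, cut into pt x ph x pw patches."""
    assert frames % pt == 0 and height % ph == 0 and width % pw == 0
    return (frames // pt) * (height // ph) * (width // pw)

def toy_denoise(num_tokens, steps=10):
    """Placeholder diffusion loop: start every token as pure noise and
    nudge it toward a clean value step by step. A real model would use
    a trained transformer to predict the noise at each step."""
    latents = [random.gauss(0.0, 1.0) for _ in range(num_tokens)]
    for _ in range(steps):
        # Stand-in for the learned denoising update.
        latents = [x * 0.5 for x in latents]
    return latents

# A hypothetical 16-frame, 32x32 latent clip with 4x8x8 patches
# yields 4 * 4 * 4 = 64 tokens for the transformer to denoise.
n = patch_grid(16, 32, 32, 4, 8, 8)
clip = toy_denoise(n)
print(n, len(clip))  # 64 64
```

Treating time as just another patch dimension is what lets a transformer handle video the same way it handles text tokens, which is the core of the diffusion transformer design.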
What data was Sora trained on?
OpenAI says it trained its model on publicly available videos, public domain content and copyrighted videos it had licensed in advance.
It hasn’t said exactly how many videos went into the training data and is unlikely to ever reveal that information. It is thought to be in the millions.
The company used a video-to-text engine to create captions and labels from ingested video files to further fine-tune Sora on real-world content.
Rumors and speculation suggest that OpenAI also made use of synthetic video content, such as that generated using Unreal Engine 5 as this would also give it information on the physics of the worlds inside the video clips it ingested.
Why did Sora surprise its developers?
“Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.” — OpenAI on X, February 15, 2024
Every large scale AI model has its quirks, behaving in unexpected ways or responding to prompts in a way that almost feels the opposite of what was intended. Sora is no different.
After the training run, Tim Brooks, a Sora researcher, said the model seemed to work out how to create 3D graphics from its own dataset without any additional training.
Meanwhile, Bill Peebles, another researcher working on the model, said it automatically created different video angles without being prompted, assuming that was what was needed.
What about content restrictions and privacy?
During training, red teamers and safety experts also worked to track, label and restrict misuse, including misinformation, hateful content and bias, through adversarial testing.
Generated videos will also carry metadata tags labeling them as made by AI, and text classifiers will check that prompts don’t violate usage policies.
Like DALL-E 3, OpenAI says Sora will have a number of content restrictions before launch. These will include limits on generating images of real people, as well as a ban on videos showing extreme violence, sexual content, hateful imagery, celebrity likenesses or the IP of others, such as logos and products. None of this is easily possible with DALL-E 3, and the same restrictions will apply to Sora.
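OpenAI has not published how its prompt classifiers work, but the general idea of screening a prompt before any video is generated can be sketched with a deliberately simple stand-in. The keyword list and the function below are hypothetical; a production system would use a trained classifier, not string matching.

```python
# Toy prompt screen. The blocked-topic list mirrors the categories the
# article mentions; it is an assumption, not OpenAI's actual policy list.
BLOCKED_TOPICS = {"extreme violence", "sexual content", "hateful imagery"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) policy check,
    False if it mentions a blocked topic."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

print(screen_prompt("A golden retriever surfing at sunset"))  # True
print(screen_prompt("A scene of extreme violence"))           # False
```

Rejecting at the prompt stage is cheaper and safer than generating a video and filtering it afterwards, which is presumably why text classifiers sit in front of the model.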
How can I access Sora?
OpenAI hasn’t set a release date for Sora yet, saying it has more work to do on safety and security related to the model. It is expected sometime in April or May.
It is most likely that Sora will be integrated into ChatGPT similar to DALL-E 3 rather than made available as a standalone product — although previous versions of DALL-E had their own page.
The model will also be available via an API, letting third-party developers integrate its functionality into their own products, although that will come further down the line.
This already happens with DALL-E 3. For example, you can use the OpenAI model within your own product to automatically create images or, as the AI image platform NightCafe does, offer your own interface for generating images with the model.
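To make the third-party integration concrete, here is a standard-library-only sketch of how a product might build a request to OpenAI’s published DALL-E 3 image endpoint. The endpoint URL and parameters match OpenAI’s documented Images API; the prompt and the `YOUR_API_KEY` credential are placeholders, and the request is constructed but deliberately never sent.

```python
import json
import urllib.request

# Request body for OpenAI's image generation endpoint, as a third-party
# product embedding DALL-E 3 might build it.
payload = {
    "model": "dall-e-3",
    "prompt": "A watercolor lighthouse at dawn",
    "n": 1,
    "size": "1024x1024",
}

req = urllib.request.Request(
    "https://api.openai.com/v1/images/generations",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
    },
    method="POST",
)

print(req.get_method())               # POST
print(json.loads(req.data)["model"])  # dall-e-3
```

If Sora does ship with an API, developers would presumably swap in a video endpoint and model name in the same way, though no such endpoint has been announced.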
Source: www.tomsguide.com