Introduction
OpenAI, led by its CEO Sam Altman, has just announced Sora AI, a text-to-video model that can create realistic and imaginative scenes from text instructions.
OpenAI has been leading the generative artificial intelligence industry since the launch of GPT. It changed text generation with ChatGPT, image generation with DALL·E, and speech recognition with Whisper, and now enters video generation with Sora.
What is Sora AI & How Does it Work?
Sora AI is a diffusion model: it generates a video by starting with something that looks like static noise and gradually transforming it by removing the noise over many steps. Similar to GPT models, Sora uses a transformer architecture, which unlocks superior scaling performance.
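To make the diffusion idea concrete, here is a toy sketch of the reverse-diffusion loop described above: start from static-like noise and repeatedly subtract a predicted noise component. The denoiser here is a stand-in (it simply treats a fixed fraction of the sample as noise); in a real model like Sora, that prediction comes from a large trained transformer, and the exact schedule below is made up for illustration.

```python
import random

def generate(n=16, total_steps=50, seed=0):
    """Toy reverse-diffusion loop: begin with noise, denoise step by step."""
    rng = random.Random(seed)
    # Start from a sample of pure Gaussian noise ("static").
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(total_steps):
        # A real model predicts the noise with a trained network; this
        # stand-in pretends a fixed fraction of the sample is noise.
        sample = [x - x / total_steps for x in sample]
    return sample

# The same seed reproduces the starting noise for comparison.
start = [random.Random(0).gauss(0.0, 1.0) for _ in range(16)]
frame = generate()
# Each value's magnitude shrinks as "noise" is removed over many steps.
print(max(abs(x) for x in frame) < max(abs(x) for x in start))  # True
```

The point of the sketch is only the control flow: generation is not a single forward pass but many small denoising steps applied to the whole sample.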
OpenAI represents videos and images as collections of smaller units of data called patches, each of which is akin to a token in GPT. By unifying how it represents data, OpenAI was able to train diffusion transformers on a wider range of visual data than was possible before, spanning different durations, resolutions, and aspect ratios.
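The patch idea can be sketched as follows: a tiny video, stored as frames of pixel values, is cut into fixed-size spacetime cubes, each flattened into one list of values that plays the role a token plays for GPT. The shapes and patch size below are invented for illustration; Sora's actual patching scheme is not public.

```python
def to_patches(video, patch=2):
    """Split a video (frames x height x width) into patch-sized spacetime
    cubes, each flattened into a single list of values (a "visual token")."""
    t, h, w = len(video), len(video[0]), len(video[0][0])
    tokens = []
    for ft in range(0, t, patch):          # step through time
        for y in range(0, h, patch):       # step through rows
            for x in range(0, w, patch):   # step through columns
                tokens.append([
                    video[ft + dt][y + dy][x + dx]
                    for dt in range(patch)
                    for dy in range(patch)
                    for dx in range(patch)
                ])
    return tokens

# A 4-frame, 4x4 "video" of zeros -> 2*2*2 = 8 patches of 8 values each.
video = [[[0] * 4 for _ in range(4)] for _ in range(4)]
tokens = to_patches(video)
print(len(tokens), len(tokens[0]))  # 8 8
```

Because every clip decomposes into the same kind of patch regardless of its duration, resolution, or aspect ratio, one model can be trained on all of them.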
Sora builds on past research in the DALL·E and GPT models and uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. This allows the model to follow the user’s text instructions more faithfully in the generated video.
Capabilities of OpenAI’s Sora AI Model
1. Generate complex scenes with multiple characters
Prompt
A beautiful homemade video showing the people of Lagos, Nigeria in the year 2056. Shot with a mobile phone camera.
2. Specific types of motion
Prompt
The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from its tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds.
3. Accurate details of the subject and background
Prompt
A cartoon kangaroo disco dances.
4. Different styles of videography
Cinematic Style
Animation Style
Closeup Style
The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.
It has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. It is also capable of generating entire videos all at once or extending generated videos to make them longer.
In addition to being able to generate a video from text instructions, it can take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small details. The model can also take an existing video and extend it or fill in missing frames, making its capabilities truly endless.
Weakness of OpenAI’s Sora AI Model
Like everything, OpenAI’s Sora also has some weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark.
The model may also confuse spatial details of a prompt, for example mixing up left and right, and may struggle with precise descriptions of events that unfold over time, like following a specific camera trajectory, as reported by OpenAI.
How to use Sora AI?
OpenAI has announced that Sora AI is not available to use for now; it will be released to the general public at a later, as-yet-unspecified date.
When Sora AI becomes available, users can expect an interface similar to DALL·E’s, where they can generate videos from text, images, and existing videos.
Who can access Sora AI?
Sora AI is currently made available only to Red Teamers (domain experts in areas like misinformation, hateful content, and bias) who will adversarially test the model to assess critical areas of harm and risk that could arise from its use.
OpenAI has also granted access to several visual artists, designers, and filmmakers to gather feedback on how to make the model most helpful for creatives.
What do we think of Sora AI?
In our opinion, OpenAI has once again made a big leap in the artificial intelligence industry. Not only will the Sora model allow for easy and cost-effective video generation, but its ability to break free from the laws of nature will open up greater creativity for individuals.
The videos made available by OpenAI are also mesmerizing and unlike those of any model before. Not only does Sora produce high-quality animations, but it has also nailed realism. We at The AI Jargon can’t wait to play with Sora and witness its capabilities for ourselves. How about you? Leave a comment down below!
All video credits go to OpenAI Sora.
FAQs
Is Sora AI available to use?
Sora AI is currently not available to the general public; it is accessible only to Red Teamers, who are assessing critical areas of harm and risk.
What is Sora AI?
Sora is a video generation model developed by OpenAI, the company behind other models such as GPT, DALL·E, and Whisper.
What does Sora AI do?
Sora allows users to generate videos using text instructions, turn a still image into an animated video, and even fill or extend existing videos.