How will OpenAI’s Sora Impact the World?

AI has made a big impact on the world recently with tools like ChatGPT and DALL·E, but now OpenAI has just revealed Sora, its newest AI model: type in any prompt, and it will output a completely AI-generated video.

Right now, Sora isn’t open to the public; only members of the red team, a group of testers who probe the product for problems, can access the tool. The videos don’t have any audio, and they can only be up to a minute long. Sooner or later, however, Sora will be available for everyone to use. How will this impact all of us, especially people in the movie and animation industry?

OpenAI has a couple of samples on its website showing what Sora is capable of. At first glance, you might not be able to tell that some of these videos are AI-generated, but look a little closer and it becomes obvious that some things are off. One example shows a group of people gathered in a room to celebrate a birthday. On a first viewing, nothing seems wrong, but watch it again and some oddities stand out.

The people’s hands don’t move correctly when they start clapping, and the flames on the candles don’t flicker in a consistent direction. Even with these small, noticeable flaws, other parts of the same video still look realistic. Among the other videos, some are much better, and some look worse. For example, one shows several wolf pups moving around, but they walk through each other, and some of them simply spawn into the scene.

Right now, Sora can make decently convincing videos from the prompts it has been given so far, and it’s only going to improve from here. Before long, everyone might be able to make completely new videos with minimal flaws just by typing in a few words. Does this mean that all animators, directors, actors, and online content creators will be put out of a job by AI? I don’t think so.

No matter how good AI gets, something will always feel slightly off. It’s a good tool for quickly creating stock footage of specific things, but a whole movie or show would look and feel uncanny to watch the whole way through. An entirely AI-generated movie would also likely contain far more mistakes than a short clip.

The main problem with this tool is how it could be used to spread misinformation. Someone could post an AI-generated video of a person committing a crime they never committed, or post a generated video of a person saying something they never said. If people don’t fact-check the video, or don’t watch it more than once to spot the signs that it was made with AI, they might just keep spreading that fake information. You might say this is no different from someone sharing a fake article or an edited photo, and you wouldn’t be wrong, but videos can be far more convincing and harder to prove fake.

In conclusion, OpenAI’s Sora is exciting and unsettling at the same time. It could be used in many creative ways, but it could also be used to make and spread more convincing misinformation.