
We’re Sora-ing, flying

February 17, 2024: OpenAI Sora logo on a smartphone display

IMAGO/Bihlmayerfotografie via Reuters Connect

OpenAI, the buzzy startup behind the ChatGPT chatbot, has begun previewing its next tool: Sora. Just as OpenAI’s DALL-E lets users type a text prompt and generate an image, Sora will let customers do the same with video.


Want a cinematic clip of dinosaurs walking through Central Park? Sure. How about kangaroos hopping around Mars? Why not? These are the kinds of imaginative scenes Sora can, in theory, generate from just a short prompt. So far the software has been tested only by a select group of people, and the reviews are mixed: it’s groundbreaking, but it often struggles with details like scale and produces visual glitches.

AI-generated images have already posed serious problems, including the spread of photorealistic deepfake pornography and convincing-but-fake political images. (For example, Florida Gov. Ron DeSantis’ presidential campaign used AI-generated images of former President Donald Trump hugging Anthony Fauci in a video, and the Republican National Committee did something similar with fake images of Joe Biden.)

While users may not yet have access to movie-quality video generators, they soon might — something that’ll almost certainly supercharge the issues presented by AI-generated images. The World Economic Forum recently named disinformation, especially that caused by artificial intelligence, as the biggest global short-term risk. “Misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years,” according to the WEF. “A growing distrust of information, as well as media and governments as sources, will deepen polarized views – a vicious cycle that could trigger civil unrest and possibly confrontation.”

Eurasia Group, GZERO’s parent company, also named “Ungoverned AI” as one of its Top Risks for 2024. “In a year when four billion people head to the polls, generative AI will be used by domestic and foreign actors — notably Russia — to influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale,” according to the report. “A crisis in global democracy is today more likely to be precipitated by AI-created and algorithm-driven disinformation than any other factor.”
