Pika 1.0 makes generating videos with AI easier and more accessible

Anna Vod · January 4, 2024 · Short URL: https://vator.tv/n/57b7

The technology is not perfect, but it can create pretty content that's modifiable

These days, most tech jobs have some sort of "AI knowledge" qualifier in the description. Of course, if you're in tech, understanding AI seems to be a must-have. It's even the case for visual artists. Admittedly, this requirement for visual artists seems excessive: how tough can it be to write a detailed prompt to generate the image or video you need? That was my view until I tried it myself. In my experiment, I learned firsthand that the machine often produces a visual that's totally unpredictable and far from what you had in mind when you wrote that prompt.

The founders of Pika Labs ran into the same issue and decided to change that. Now, Pika Labs' text-to-video platform Pika 1.0 is available to all, free of charge, and it aims to make AI video generation easy for anyone to use.

Ultimately, the machine creates video from a written prompt by combining natural language processing (NLP) with video generation algorithms, as Pika explained in a statement. It starts by analyzing and segmenting the given text, then creates images and scenes using reference databases and previously learned patterns, and finally assembles those pieces into one video.
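To picture the flow Pika describes (segment the prompt, render scenes, stitch them together), here is a minimal, purely illustrative sketch in Python. The function names and logic are my own assumptions for illustration and do not reflect Pika's actual code or API.

    # Conceptual sketch only: a hypothetical text-to-video pipeline in the
    # spirit of the description above. Nothing here calls a real model.

    def segment_prompt(prompt: str) -> list[str]:
        # Naive stand-in for NLP segmentation: split the prompt into clauses.
        return [part.strip() for part in prompt.split(",") if part.strip()]

    def render_scene(description: str) -> dict:
        # Placeholder for a generative-model call that would return frames
        # for one scene; here we only record the description.
        return {"description": description, "frames": []}

    def assemble_video(scenes: list[dict]) -> list[dict]:
        # Placeholder for stitching the rendered scene clips into one video.
        return scenes

    prompt = ("A pretty woman going shopping, a highly-detailed busy street "
              "of a big city, nighttime, photo image quality")
    scenes = [render_scene(s) for s in segment_prompt(prompt)]
    video = assemble_video(scenes)
    print([scene["description"] for scene in video])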

Having emerged just six months ago, Pika Labs has scored $55 million across three fundraises. The latest round, totaling $35 million, came a month ago and was led by Lightspeed Venture Partners, a Menlo Park-based investor in buy-now-pay-later fintech Affirm, stock trading app Webull, and payments processing platform Stripe.

Other participating equity firms were Homebrew in Burlingame, CA, Conviction Capital in London, SV Angel in San Francisco, plus AI news platform Ben’s Bites. Angel investors in Pika were Adam D’Angelo (Quora), Alex Chung (Giphy), and Nat Friedman (GitHub).

Pika Labs positions itself as a nonprofit organization focused on applying AI tools "to solve real-world problems." It was started by Demi Guo and Chenlin Meng after they attempted to make a movie with existing generative AI tools and failed to achieve a winning result. In fact, the two dropped out of Stanford, where they were Ph.D. students at the AI Lab, to launch their own easy-to-use AI video generator, as CEO Guo told Forbes in late November.

Now, the web application is used to make millions of videos per week, and Pika Labs commands a valuation of $200 million.

The company sees applications for the technology in education, marketing, filmmaking, and virtual reality. But there is no end to its potential uses, I'd say. It's just a start.

And at the start, the technology is far from perfect: objects in a Pika-generated video merge into each other and lose parts, people move unnaturally, and inaccuracies appear here and there. In places, it's like looking into a distorted mirror. The machine simply refused to show people with limbs where they're supposed to be, or it would outright ignore my prompt to show passengers inside a bus and instead depict an empty bus again and again; the bus itself looked more like the scene of a bus crash.

Clearly, I wouldn’t get that AI-tamer artist job.

However, I did learn something in the process that I'll share here for fellow AI-generating strugglers. Rather than using words like "passengers" and "lady," just say "people" and "woman." The cover of this article shows a still from the video generated for the prompt "A pretty woman going shopping, a highly-detailed busy street of a big city, nighttime, photo image quality."

What's nice about Pika is the limitless number of attempts you get at adjusting the same video to make it more accurate or add more detail. You can modify the result by region, expand the canvas, and upload your own visual to be included. The backgrounds in the videos I got, as opposed to the main subjects, looked good enough, and the AI picked up on cues like "black-and-white" and "anime-style."

Among the challenges Pika Labs itself has noted in text-to-video AI generation today are processing complex and abstract narratives and ensuring creative versatility.

As the text-to-video revolution takes place, Pika Labs sees the next step as personalizing the technology: AI could adapt to a user's preferences to generate content tailored to individual taste. Wouldn't that be nifty?
