Window Seat, the first A.I. feature film!

Started by hooroo, July 26, 2023, 06:15:24 AM


hooroo

Hey guys, I have just brought out the first A.I. movie. At 61 minutes it crossed the feature-length threshold. It's called Window Seat.

"A man sees his high school bully on a plane trip, and is pushed into a battle of wits with him. Meanwhile, his company and personal life are thrust into the national spotlight. Will he be able to outsmart his former tormentor and expose him, or will he become a victim, once again?"

Here is the official trailer:

[embedded trailer video]
It was screened for critics and is currently pushing to festivals. PM me if you want to watch it or have any questions.

If you remember I posted about my first film Aimy in a Cage here in 2015: https://xixax.com/index.php?topic=13349.0

Aimy in a Cage: $500,000 budget, crew of 30, 20 union actors, 1.5 years of post-production.
Window Seat: $190 budget, crew of 1, no human actors, 3 weeks start to finish.

I am looking into a roto-filter version as well. The dream is that one day somebody can make a full-blown live-action epic in A.I. from their basement... so my mentality was to treat this as much like real filmmaking as possible.

Each line of dialogue took about 25-50 takes. Producing the film took about 4,000 four-second videos; roughly 1 in 50 previews, and 1 in 16 full generations, made it into the film.
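For a sense of the shooting ratio, here is my own back-of-the-envelope on those numbers (assuming every clip is exactly 4 seconds):

```python
# Back-of-the-envelope on the shooting ratio above.
# Assumption (mine, not from the post): every generated clip is exactly 4 seconds.

clips = 4000                       # four-second videos generated in total
clip_sec = 4

total_min = clips * clip_sec / 60  # raw A.I. footage generated
kept = clips // 16                 # "1 in 16 generations" made the cut

print(f"~{total_min:.0f} minutes of raw footage generated")  # ~267 minutes
print(f"~{kept} clips kept for the 61-minute film")          # ~250 clips
```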

The hate has been immense, but considering how disruptive A.I. is to tradfilm, it is to be expected.

hooroo

We were banned on reddit for posting the trailer. A.I. is extremely controversial.

But remember, the ones who hate it will also never finance your feature. They want to keep art for the rich, an endless nepotistic aristocracy.

Poster I had done for it: https://imgur.com/a/pUYY7o9


WorldForgot

A.I. has been a lot of fun to mess around in. I still consider this tech to be in the infancy stage, but it's neat what's coming out.

Which software did you use to generate most of the images? Was there a combination of tools used? How did you keep prompts intact enough that the software would repeatedly generate consistent images (i.e., keeping characters looking as they did a few scenes prior)?

hooroo

I used Runway Gen-2. There were so many rapid developments during the making of this (it began June 20, only a month ago).

At first it was an entirely different algorithm, which is why the first 5 minutes of the movie look more like choppy claymation.

It was also a lottery. You get about 100 credits for 20 bucks, and you are forced to use whatever the model gives you, so you had to be resourceful with every generation.

About a week in, they released previews, so you could choose from 4 possibilities before committing a generation, which gave me more options.
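A rough way to see how much the 4-up previews helped (my own estimate, treating the previews as independent draws):

```python
# Why previews mattered (my estimate, assuming the 4 previews per
# generation are independent draws).
p_preview = 1 / 50                        # "about 1 in 50 previews" was usable
p_generation = 1 - (1 - p_preview) ** 4   # chance at least 1 of 4 works

print(f"hit rate per generation: {p_generation:.1%}")  # ~7.8%, in the same
# ballpark as the "1 in 16 generations" (6.2%) quoted earlier
```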

But halfway through the movie, when it was impossible to finish at that rate, they introduced unlimited credits for $100. So I could now generate hundreds of videos until I got one where the lips were moving.
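To show why that change mattered, a rough cost sketch (the per-clip credit cost is my guess, not Runway's actual rate):

```python
# Why the $100 unlimited plan made the film finishable: a rough sketch.
# The 5-credits-per-clip figure is my assumption, not Runway's actual rate;
# "100 credits for 20 bucks" is from above.

credits_per_dollar = 100 / 20
credits_per_clip = 5              # assumed

metered_cost = 4000 * credits_per_clip / credits_per_dollar
print(f"4,000 clips at metered rates: ~${metered_cost:,.0f}")  # ~$4,000
print("vs. the unlimited plan: $100")
```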

I could also experiment. For example, I found that if I typed that a character is barefoot, it wouldn't actually show them barefoot, but it would give them relaxed body language. If you wonder why they are holding a spoon in a couple of shots, it is because when I told the generator the character is holding a spoon, it had to track the spoon through the scene, so the A.I. paid a lot more attention to their hand motion.

If the lips don't move but the hands do, it still registers to the viewer that they are talking.

Then on the very last day of making the movie, they released a feature that lets you animate stills. This was a huge help: for shots where I couldn't get any lip motion, I could feed their still frames into the generator, and after 10-20 tries get a little lip motion. This saved about a dozen shots.
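Loosely, that salvage workflow amounts to the loop below (a sketch only; the function names are hypothetical stand-ins, since the real process was manual clicking in Runway's web UI):

```python
import random

# Sketch of the "retry until the lips move" salvage loop described above.
# generate_from_still() and lips_move() are hypothetical stand-ins, not a
# real API; in reality this was done by hand in Runway's web UI.

def generate_from_still(still_frame):
    # Pretend image-to-video call; ~1-in-14 clips show lip motion,
    # roughly matching the "10-20 tries" quoted above.
    return {"source": still_frame, "lip_motion": random.random() < 0.07}

def lips_move(clip):
    # In practice this check was done by eye, take by take.
    return clip["lip_motion"]

def salvage_shot(still_frame, max_tries=20):
    for attempt in range(1, max_tries + 1):
        clip = generate_from_still(still_frame)
        if lips_move(clip):
            return clip, attempt
    return None, max_tries  # some shots never landed, even after hundreds of tries

clip, tries = salvage_shot("closeup_still.png")
print("salvaged" if clip else "gave up", f"after {tries} tries")
```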

There are some shots where, even with that still generator and hundreds of tries, I still could not get any lip motion, such as the stewardess and the Australian woman. But then it gave me a shot where her face is behind a glass of beer, which obscures the line she is saying, and her head rises up and she's smiling.

This element of randomness in the A.I. is exactly like live-action filmmaking, but because of the infinite quantity of options, it is turbocharged. The A.I. gives you so many iconic moments and reactions that I could never get in live-action filmmaking.

Re: the character consistency, I think they have a finite number of actors locked into the algorithm, so "guy with glasses" would always be the same guy.

The movie was only possible because I got incredible machine-generated voices for the three main cast. I would not have made this without those voices.

WorldForgot

That is comprehensive and fascinating, hooroo! I generated some loops from stills in Stable Diffusion like two months back, didn't like how they turned out, and the developments even since then have been massive. It seems all these A.I. models run on a credits system; I know ChatGPT and Claude do too, to an extent.

I look forward to watching it and learning more of these tools myself.

WorldForgot

Also @hooroo I'd be all about that private link to watch the full thing :3

hooroo

Had to take down the public link; PM me if you want to view it. Since it's about to go to festivals and the like, any feedback will help.