As discussed in my meeting with Anna, I am planning to make an opening sequence using AI-generated images along with AI narration to establish the context of the film.
The purpose of this film will be to immerse the viewer within the world. To do this, I plan to train a StyleGAN on images of both cities and nature, then make a latent walk film that slowly progresses from scenes of nature to colossal cityscapes. I then plan on creating an AI depth map for the film, which I will use to displace the footage and create hard shadows. This rise of concrete and glass out of the natural world, until no more nature can be seen, will establish that this new world is entirely artificial. When the audience then sees the characters set within natural worlds, it will raise the question of whether the events are taking place within an imagined world.
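As a rough sketch of how the latent walk could work: a common technique (not specified in my plan above, so treat it as one option) is spherical interpolation between latent keyframes, so the in-between latents stay at a sensible norm for a Gaussian latent space. The `slerp` and `latent_walk` helpers below are hypothetical names; each resulting frame would then be fed to the trained StyleGAN generator to render the film.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:  # vectors nearly parallel: fall back to linear interpolation
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

def latent_walk(keyframes, steps_per_segment=60):
    """Chain slerps through a list of latent keyframes, e.g. from
    'nature' latents to 'city' latents, one frame per step."""
    frames = []
    for z0, z1 in zip(keyframes, keyframes[1:]):
        for s in range(steps_per_segment):
            frames.append(slerp(z0, z1, s / steps_per_segment))
    frames.append(keyframes[-1])
    return frames

# Each frame would then go through the trained generator,
# e.g. img = G(z[None]) in a typical StyleGAN codebase.
```

At 60 steps per segment and 24 fps, each pair of keyframes gives about 2.5 seconds of footage, so the pacing of the nature-to-city progression can be tuned just by choosing how many keyframes to sample from each class.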




To begin with, I modified an image scraper script I had written with the Bing API and Python to download the images I will use to train the StyleGAN. Given a list of search terms, the scraper ran until completion, saving me the incredibly boring task of downloading the images myself. The scraped images are usually relevant, although many irrelevant images are also downloaded; for example, the search 'jungle sunset' also returns many images of people in the jungle, which will only confuse the StyleGAN.
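A scraper along these lines can be sketched as below. This is not my exact script, just a minimal stdlib-only version against the Bing Image Search v7 endpoint; the subscription key is a placeholder, and `build_request`/`scrape` are illustrative names. The actual download loop only runs when executed as a script.

```python
import json
import os
import urllib.parse
import urllib.request

# Real Bing Image Search v7 endpoint; the key below is a placeholder.
ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"

def build_request(term, count=50, offset=0, key="YOUR_KEY_HERE"):
    """Build an authenticated request for one page of image results."""
    params = urllib.parse.urlencode({"q": term, "count": count, "offset": offset})
    req = urllib.request.Request(f"{ENDPOINT}?{params}")
    req.add_header("Ocp-Apim-Subscription-Key", key)
    return req

def scrape(terms, out_dir="scraped", per_term=200, key="YOUR_KEY_HERE"):
    """Page through results for each search term and save every image."""
    os.makedirs(out_dir, exist_ok=True)
    for term in terms:
        for offset in range(0, per_term, 50):
            req = build_request(term, count=50, offset=offset, key=key)
            with urllib.request.urlopen(req) as resp:
                results = json.load(resp)
            for i, item in enumerate(results.get("value", [])):
                name = f"{term.replace(' ', '_')}_{offset + i}.jpg"
                try:
                    urllib.request.urlretrieve(item["contentUrl"],
                                               os.path.join(out_dir, name))
                except OSError:
                    continue  # skip dead links rather than abort the run

if __name__ == "__main__":
    scrape(["jungle sunset", "colossal cityscape"])
```

Skipping dead links instead of aborting matters here: a large fraction of `contentUrl`s in search results are stale, and one bad link shouldn't stop an overnight scraping run.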

The images I have scraped vary widely in subject. Although the StyleGAN will struggle with this, I think the results may be extraordinary.