I generated each image using text unrelated to the song. Or is it? Also, what IS God doing now?
EXPLANATION:
I created the animation for this song using DiffusionBee, by starting with this prompt:
candle, awe, fear, Zdzisław Beksiński oil on canvas
Scale : 4 | Steps : 25 | Img Width : 512 | Img Height : 512 | model_version : 1.4tf
I made multiple iterations of this, following the trail as it evolved, using this basic prompt:
candle, awe, fear, Zdzisław Beksiński oil on canvas
Scale : 7.5 | Steps : 25 | Image Strength : 0.3 | model_version : 1.4tf
As other images arose from each iteration, I replaced "candle" with "person," and added "water," then "fire," and eventually "mountains" and "bird." Again, I chose those new nouns as they emerged naturally from the imperfect iterations that happen with the Image Strength at 0.3.
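For anyone who wants to script this instead of clicking through DiffusionBee (which is GUI-only), here is a rough sketch of the same iterative img2img workflow using the Hugging Face diffusers library as a stand-in. The parameter names map onto the settings listed above: strength is "Image Strength" (0.3), guidance_scale is "Scale" (7.5), and num_inference_steps is "Steps" (25). The evolve_prompt helper and the iteration count are my own hypothetical additions; they just mimic swapping and adding nouns between runs the way described.

```python
def evolve_prompt(prompt, swap=None, add=None):
    """Swap nouns (e.g. "candle" -> "person") and append new ones,
    keeping the style descriptor ("... oil on canvas") last."""
    terms = [t.strip() for t in prompt.split(",")]
    if swap:
        terms = [swap.get(t, t) for t in terms]
    if add:
        terms = terms[:-1] + list(add) + terms[-1:]
    return ", ".join(terms)


def run_iterations(init_image, prompt, n=20, out_dir="frames"):
    """Iterative img2img loop (not run here; needs model weights and a GPU).

    Each pass feeds the previous output back in at strength 0.3, so the
    image drifts gradually -- the same effect described above.
    """
    from diffusers import StableDiffusionImg2ImgPipeline  # assumed backend

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4")  # model_version 1.4
    image = init_image
    for i in range(n):
        image = pipe(prompt=prompt, image=image, strength=0.3,
                     guidance_scale=7.5, num_inference_steps=25).images[0]
        image.save(f"{out_dir}/frame_{i:03d}.png")
    return image


# Example of evolving the prompt between batches of iterations:
prompt = "candle, awe, fear, Zdzisław Beksiński oil on canvas"
prompt = evolve_prompt(prompt, swap={"candle": "person"}, add=["water"])
```

After a batch of frames, you would call evolve_prompt again (adding "fire," then "mountains" and "bird") and keep iterating from the latest image.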
I then upsized each selected image and opened Final Cut Pro. I put the song on the timeline and, after loading each image in order, watched as the images somehow magically found ways to illustrate the words (like syncing Pink Floyd with The Wizard of Oz).