Two years ago I generated the first scene of my first novel in Midjourney as a static image, then used the Runway app to bring Roisin to life; this was the result. The post about it is here, and a later one on keeping a consistent character across still image generations is here.
It was good, somewhat amazing even, but this week Midjourney added a powerful new video generation model, and the same image produced this even more stunning result.
The new version has much more consistency and intricate detail: down in the bottom right, a silver zip toggle sways as she types. The music was not AI; it was just added via a YouTube Short. Midjourney has the stated goal of being able to generate worlds, i.e. a metaverse. This seems to be at least as good as Google's Veo 3, which we have only just got here. It's much harder to spot the progress in LLMs, but this is a good visual indication of the speed of improvement.
The flying penguins get a look in too. It has been tricky to get penguins to fly so far, but Google's Veo 3 managed this one: a desert island, with the soundtrack generated as well. It's hard to see that they are penguins, but they are 🙂
Whilst that looked really good, the ultimate one came from Midjourney video again. Look at this 🙂 (Sound added with the YouTube Shorts editor.)