I decided to create a very short metaverse evolution explainer video, but using GenAI. It is based on looking at patterns: we have all got used to moving from paper maps to digital ones with GPS, and now to fully instrumented digital twins with traffic and other information. That leap applies to many other use cases. It’s all metaverse – digital to physical and back again, people, machines and data. All on my own midjourney.com, runway.ml and luma.ai accounts. I also learned a lot more about how hard it can be to wrangle AI into what you really want, but it works 🙂
What was mad about this was that I generated my key images in Midjourney and had a couple of goes in Runway that I was happy with (the splicing together and the talking soundtrack are also Runway), but a day after I had it where I wanted it, Luma.ai went live, so I gave that a go. For a few of the scenes it was just much closer to what I needed. There is a point where you just have to hit publish, but these base tools keep improving, let alone the skill of asking them for the right thing. It is very much a creative process, even if the mantra is often that AI is taking over.