The past few weeks have seen a wave of virtual world generation AI models appear. It is a natural next step from images and videos. Google Genie 3 has been released to some early testers, with lots of videos of it working doing the rounds. Hunyuan World 1.0 is an open source world generator; I am still trying to get a development environment working properly to try it. As with all dev tech, if you don’t keep the environments updated then things just get out of hand and stop working. Wheels within wheels of tools and dependencies, and it always has to be just the right version, not too old and certainly not new! It was whilst battling my own machine, and the virtual machines within it, that Mirage 2 appeared on the web (Mirage: AI UGC game engine), so I gave that a go.
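For anyone fighting the same dependency battle, here is a minimal sketch of the kind of version sanity check I mean, in Python. The package names and pinned versions are purely illustrative assumptions, not HunyuanWorld’s actual requirements.

```python
# Minimal sketch: check installed packages against pinned versions before
# running a model. The package names and versions below are illustrative
# assumptions, not any particular project's real requirements.
from importlib.metadata import version, PackageNotFoundError

# Hypothetical pins: "just the right version, not too old and certainly not new".
PINNED = {
    "torch": "2.1.2",
    "numpy": "1.26.4",
}

def check_pins(pins):
    ok = True
    for pkg, wanted in pins.items():
        try:
            found = version(pkg)
        except PackageNotFoundError:
            print(f"{pkg}: not installed (want {wanted})")
            ok = False
            continue
        if found != wanted:
            print(f"{pkg}: found {found}, want {wanted}")
            ok = False
    return ok

if __name__ == "__main__":
    print("Environment matches pins." if check_pins(PINNED) else "Environment mismatch.")
```

Nothing clever, but running something like this before a long model download saves a lot of the "wheels within wheels" pain later.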
Mirage 2 lets you provide a picture, then write or AI-generate a description of the world and the character. It then gives you a few seconds of walking around in that world as it generates it on the fly.
The first thing I tried was one of the preset prompts and images, a cyberpunk world. Rather than walk forward, I turned around and headed into the unknown areas behind. It worked. These are obviously early days, but when I think back to Midjourney’s early images, which I started using in 2022, it has the same vibe and the same opening up of potential new forms of experience.
Next I used a photo of my wife in the garden, taken as a sort of video game view. I had not given it much detail about her, so it went a little off track, but the principle of being able to walk to the end of our garden using game controls was exciting.
Bear in mind this was a quick AI prompt based on the photo, then just a few seconds waiting in a server queue, before being live in the world and walking around.
I am very much looking forward to experimenting some more as these tools improve, just as all the others have. Maybe Reconfigure’s movie will be a bit more interactive, or some hybrid media yet to be created. Hello metaverse.