I was really happy to be invited back to my favourite podcast, GamesAtWork dot Biz, to share some air time with Michael and Andy last week (one fewer Michael than usual, unfortunately). As Andy and I are both in the UK, this took the number of Brits on the show up to a majority, hence the title.
You can hear the latest episode on all the usual podcast places, or follow this link to ep386, Too Many Brits.
Amongst other things I got to enthuse about the latest iteration of my favourite real-guitar teaching and playing app/game Rocksmith, now Rocksmith+, which for me really shows the power of instrumentation of, erm… instruments 🙂 The original was back in 2012 and very quickly broke the guitar I had owned since 1988 🙂 I also learned that the action on that guitar was way too high; the strings on my replacement electric were much easier to push down.
Rocksmith, amongst other things, lets you feel like you are a rock god at a concert :) as per the Midjourney-generated image below. Though the real enjoyment is in feeling improvement very quickly through what is still intense practice, which is wonderful.
I have carried on experimenting with the wonderful Midjourney app on Discord. My previous post had some images and a degree of explanation, but as a reminder: this is one of the new generation of publicly available AI-based text-to-image generators. Just watching the flow of requests and the range of styles and images that appear constantly is fascinating. It generates four images for each text request, which you can then curate, picking one to seed further generations similar to it. It can also take a web image not generated by it as inspiration.
This, however, made me chortle, then stop to ponder, then simply just say wow. I said /imagine Lego John Wick – that is all I asked.
Lego John Wick
Look at the variety of styles: the top right and bottom left are actual minifigs with shadows, the top left is stylised, looking as if he is paid in Lego pieces rather than gold, and the bottom right is an angular representation, but a cool image.
Then of course that got me going onto the obvious /imagine Lego predator
Lego Predator
This just looks great, with lots of Predator variations, which, as has happened with the films, is pretty much in the storyline with its multiple tribes. (BTW, watch Prey, the latest Predator movie, on Disney+/Hulu; it is really good!)
Once again Midjourney hit me with the range of styles it delves into. I moved off the Lego theme for a while and asked to see Laurel and Hardy breakdancing.
Laurel and Hardy breakdancing
This is a second-generation image from a set of already very good images. Laurel and Hardy are of course well known, but there are no breakdancing scenes to refer to. Also, it chose black and white, presumably because most of the source content was black and white. The quirky style of this, compared to the photorealistic Lego minifigs and the stacks of other images, shows how amazingly versatile this approach is. You should be able to get a glimpse of the madness on my public-facing page in Midjourney (if that works), so I can stop posting here all the time saying "look at this" 🙂 You may need to change the second-from-the-right navigation drop-down to grids or all, as it seems to default to the large upscaled individual images.
Firstly, yes, it's been a while since I posted here; I could have done with an AI clone of me carrying on sharing interesting things. I was out mainly due to an annoying post-ailment "thing" (medical term) hitting my facial nerves, which meant for months any screen time trying to read or write was visually and mentally uncomfortable. That was followed by a dose of Covid that knocked me out for a while but also seems to have had a potential positive impact on whatever else was going on. Anyway, I am back. Has anything happened in the past few months, I mean half a year?
I suddenly felt compelled to blog about some of the marvellous work going on with AI image generation of late, with things such as DALL-E and Midjourney becoming available to more people. I have so far only used Midjourney, which is accessed through Discord. In the relevant channel you simply type the command /imagine followed by any descriptive text you fancy and it will generate four images based on that, with often surprising results. Any of those images can be selected as the root to generate more in that style, or you can upscale (get a big version of) one you favour.
Handy hint: the mobile version of Discord really likes to scroll to new items, and Midjourney works by updating your post to the channel. As so many posts are being generated every few seconds, you might think you can leave the app on your post to see the results, but very quickly it will zoom up the screen. Constantly scrolling back to try and catch your images being generated is a bit of a pain. However, if you log into midjourney.com with your Discord ID it will present you with your collection and lots of links to get back to do things with them. The Discord search doesn't seem to work on userid etc., so it took me a while to figure this out.
After a few attempts, and with one eye on the political situation and crisis faced in the UK, I asked Midjourney for "an artist forced to stop painting in order to work in a bank" and got this wonderful set of images.
an artist forced to stop painting in order to work in a bank
I also asked it to generate a protest by the people
a crowd of down trodden people protesting and rising up against the billionaires and corrupt politicians
Rather than continue the doom and gloom, it was time to go a bit more techie 🙂 I asked it to show me directing the metaverse from a control centre.
epredator directing the Metaverse evolution from a hi-tech digital operations center
An obvious course of action was to explore my Reconfigure and Cont3xt novels, and this is where I was very impressed by whatever is going on with this AI. I asked to see "a cyberpunk girl manipulating the physical world using an augmented reality digital twin" and it seems that Roisin (@Axelweight) is in there somewhere 🙂
a cyberpunk girl manipulating the physical world using an augmented reality digital twin
This was worth exploring, so I picked a couple of the images to seed the generation of more like them, which produced these two.
a cyberpunk girl manipulating the physical world using an augmented reality digital twin (generation 2 from bottom left of gen 1)
And this one
a cyberpunk girl manipulating the physical world using an augmented reality digital twin (generation 2 from top right of gen 1)
These were good, but the top right image in generation 1 was the one I did a more detailed version of on its own. It could be the cover of the third book, couldn't it?
Midjourney-generated potential book cover for the third in the Reconfigure trilogy. Of course, some of the more geometric results are more in line with my current cover designs, asking "A world of Fractal Iterations, Quantum Computing and strange side effects opened up. It appeared to offer a programming interface to everything around her."
A world of Fractal Iterations, Quantum Computing and strange side effects opened up. It appeared to offer a programming interface to everything around her.
In case this all seems a bit too serious, I did ask a slightly weirder question of the AI and got this.
two ducks with a top hat on carrying a salmon
For a happy image, how about this: /imagine a bright green planet full of life
a bright green planet full of life
But back to the surreal and unusual. Predlet 1.0 and I share a joke we "invented" when she was quite young. We were passing the pub in Lock's Heath, having been to the supermarket, when she looked at the pavement and saw a creature. She said "dad, look, there's a worm coming out of the pub", to which I replied "and he is legless", and here it is.
a worm legless outside a pub
As this kind of AI generation evolves from the static image to the moving one, and then on to virtual worlds and the metaverse, what a wonderfully weird set of worlds we will be able to dream/nightmare up 🙂
As many people who pop along here will know, since back in 2006 (and even before that) I have been somewhat passionate about explaining and sharing what can be done with things like game technology and virtual worlds to help with human communication. That includes entertainment (which is often regarded as not a serious business), office-type business, education, industrial digital twins; the list goes on. Metaverse is a handy word to bundle all this together, and whilst not tightly defined, I think many of us know what we mean. This month I wrote a piece for the BCS, which is available to read here, and they have nicely highlighted the phrase "Travelling and actual teleportation aside, do you think that we should be able to find better ways to feel we are with people and to have shared experiences?" Whatever you think the Metaverse is evolving to, just consider if the way you are doing things now is the best it could possibly be given the tech available. Things evolve and move, not always replacing everything behind them but adding.
Having said that, it is worth reminding everyone that the original work in 2006 in Second Life was mostly about pulling data from other sources into the environment and doing things with it. This eventually led to building Wimbledon's 2006 court with the ball trajectory from Hawkeye rendered there.
All about data from elsewhere integrating into SL in 2006
There is an ongoing discussion about interoperability of Metaverse or virtual world platforms. It ranges from whether it is desirable to have objects from world x appear in world y, and then often hits the barrier that platform x has a completely different rendering engine to platform y. The former is a style and even a business question: why does an environment of any type exist, real, virtual, augmented, projected, whatever? This then leads to the technical wars around content being "in" a development or build environment. Importing meshes, textures, animation rigs etc. is one way to look at Metaverse evolution, but I think it is also worth considering the potential for things like remote rendering. If the unifying standard for content sharing was actually containers, of potentially any size or shape and level of detail, then anything from environment x could appear in environment y and even interact with it to varying degrees, depending on permissions or needs. My Nvidia RTX ray-traced shiny car could appear in the most basic of 3D environments, as I am rendering the car at my end and the virtual world is processing a container and a stream. This is, after all, what we do when we video chat: each of our cameras is doing some work and being hosted by a basic integration layer. Yes, I know 3D and collisions and physics etc. all make it more complex, but it is worth considering that other ways of working can exist besides the build and compile we are used to from games. Also, from what I remember of Snow Crash, avatars are rendered by the person's own device; those with less computing power have more basic avatars in world. And we already did some of this back in 06/07; it just takes a different way of thinking.
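To make the container idea slightly more concrete, here is a minimal sketch in Python of how a host world might treat a remotely rendered object as nothing more than a reserved box plus an incoming frame stream. Every name here is hypothetical, my own illustration rather than any real platform's API:

```python
from dataclasses import dataclass, field

# All names here are hypothetical illustrations, not a real platform's API.

@dataclass
class RenderContainer:
    owner: str                    # who renders the contents (e.g. my RTX machine)
    size_m: tuple                 # bounding box the host world reserves (w, h, d)
    lod_hint: int = 1             # how much detail the host asks the owner for
    permissions: set = field(default_factory=lambda: {"view"})

@dataclass
class Frame:
    pixels: bytes                 # colour image of the remotely rendered object
    depth: bytes                  # per-pixel depth so the host can occlude it

class HostWorld:
    """A deliberately basic world: it never imports the object's mesh or
    textures, it just places containers and composites incoming streams."""
    def __init__(self):
        self.containers = {}

    def place(self, container_id, container):
        self.containers[container_id] = container

    def on_frame(self, container_id, frame):
        container = self.containers[container_id]
        if "view" in container.permissions:   # permissions gate interaction depth
            self.composite(container, frame)

    def composite(self, container, frame):
        # A real renderer would blend using frame.depth so the remote object
        # sits correctly in front of and behind local geometry.
        print(f"compositing {len(frame.pixels)} bytes into a "
              f"{container.size_m} box owned by {container.owner}")

# My machine ray-traces the shiny car and streams frames; the basic world
# only ever handles the container and the stream.
world = HostWorld()
world.place("shiny_car", RenderContainer("epredator", (4.5, 1.4, 2.0), lod_hint=2))
world.on_frame("shiny_car", Frame(pixels=b"\x00" * 1024, depth=b"\x00" * 256))
```

The point is that, just like a video call, the heavy rendering happens at the owner's end; the host only negotiates permissions and composites pixels and depth.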
Talking of thinking, literally: I was invited to join the Games at Work dot Biz podcast again, and I was more than happy to give a shout out to a book by Prof Richard Bartle on virtual worlds and our god-like powers over them. Check out the podcast; links are here and on any player you have. Please give it a rating to help the guys too.
There is a lot of hype around the new Matrix movie, but today a demo/experience rendered live in Unreal 5 on Xbox Series X and PS5 went live. The intro movie is fantastic; sometimes you can't tell real from virtual, other times you can, but that often feels deliberate, toying with the uncanny valley.
If you don't have a next-gen console then there is a video, and it is well worth watching, remembering this is rendered live in Unreal 5, not a pre-canned cut scene.
Many of us in tech have an affinity with the Matrix. I remember the first one, sat in the offices in Hursley building early web sites and shops, engaged in the start of digital transformation. The free character-drip screen savers were everywhere, and even on the old clunky web we had clips to watch, "I know kung fu" etc. Many of us were gamers too, so this blurring of physical and digital was really appealing. I even bought the soundtrack, not on old-fashioned CD but on MiniDisc 🙂 The franchise is always pushing the envelope and, as this video shows, they are still at it! Metaverse FTW 🙂
What a bumper time for games it is, and on 9th November the latest of my all-time favourite racing games, Forza Horizon, arrived on the Xbox. The games have always had a large free-roaming area in which to collect, drive, race and photograph some fantastic cars. Each generation gets more jaw-dropping as it pushes the limits of what can be done with the graphics, sound and physics. Number 4 in the series was set in the UK, building on a huge map that merged many key locations and cities into a manageable terrain. Number 5 has moved to Mexico, and even more stunning scenes await there. The game is bundled into Microsoft Xbox Game Pass, but being a fan I bought the full version and all the expansions, knowing full well that I will be playing this on and off for years.
The beauty of Forza Horizon is that "where we are going we don't need roads". There are roads and tracks, and you can just stick to those, but you can also tear across the landscape in even the most unsuitable car possible. This may seem at odds with the precision feel of the driving simulation, but it always manages to walk that line between arcade lunacy and pure driving simulator. Why would you go tearing around the scenery? Well, it's for views like this.
Mexico desert and volcano
The scenery alters across many different biomes, the time of day and weather also make for great variations, and the game itself switches between spring, summer, autumn and winter every week.
Also of note are the radio stations, prerecorded playlists of rock, dance and classical music with wonderful DJ voice-overs that often refer back to things that have happened, such as events just raced or about to come up. It is enjoyable to just pick a car and casually (or otherwise) drive around the place, seeing the sights and soaking up the tunes.
A feature of Forza that I always share and enthuse about is the custom paint and decals for the cars. The game has managed to preserve artwork across each generation, so the effort in creating these is not wasted. I use the decals as a demonstration of in-game advertising and support, with both my martial art and my Reconfigure books represented, along with a stylised predator face based on the Wii Mii version. These designs appear with my car in other people's games and can also be downloaded, though there are some much better painters doing great work to check out.
Scooby, Choi, Reconfigure
Doughnuts
There are cars from three-wheelers to high-end supercars, over 500 different models, all hyper-detailed inside and out. There are drag races, giant leaps and even houses to buy.
Drag race
Leap
House
There are lots of online multiplayer angles to join in on, but I really enjoy driving against the Drivatars of friends and family. Here, a single-player race is populated by the names and cars of people you know, driving to some degree in the way they would drive if they were there. This degree of personalisation is always great fun.
If you are looking at these shots and saying "oh, it all looks too shiny to be real", I saw this vehicle parked outside a London train station on a recent trip (the first in a long time).
Shiny
I have already blasted through many of the races and levels; the last game was one of the few I have level-capped and "prestiged" on. That really just comes with lots of time more than any particular skill, though doing well in races, performing superb stunts, drifting, speeding and crashing through the scenery all ramp up the levels more quickly.
It is also great that I can play this on PC and on Xbox Series X, but it is now also available as a cloud game. It's just a pity that our communication infrastructure on the train lines is not up to using that, but it's still good if other members of the family are playing something else at home.
Well, what an interesting day yesterday was, as Facebook went all out on being a Metaverse company and renamed itself Meta. It also marked a point that crosses my straight, unbiased industry analyst day job with my constant role as a Metaverse Evangelist, a title I had in 2006 and maintain. These comments are mine alone and not those of the company I work for. I have lots more to share and discuss on all things Metaverse, but that's something that people have to pay for nowadays.
However, I will say that I was really pleased to see a slick presentation on all things Metaverse from a multi-billion-dollar company like Facebook. Last year the same happened to some degree with Microsoft and Mesh at Ignite. The conversation has moved, for many people, from "nah, we don't want to interact like that, ever" to "no one company should own all this". The Metaverse the industry and society needs does, as was said in the Meta keynote, need to let people connect with both the physical and multiple virtual realities, for a multitude of reasons and with many available ways to be who we are. I have, for many years, talked about how we have had to adjust to the limitations of technology to allow our communication at distance. We still type on qwerty keyboards, a layout designed in part to stop physical typewriter hammers jamming by spacing out the commonly used letters in words. Not that we need to get rid of this form of typing, but to hold it up as the only way to interact with technology and data seems a little bit odd, doesn't it?
Back a decade or so ago I created a slide for a presentation on virtual worlds and the metaverse that was aimed at showing how far we might be able to go and why. It was in part a presentation about the fact that it is not all avatars and islands. Those work, but there are many more things that come together. There are many forms of input, including neural interfaces (see below at the left), and there are combinations of virtual to virtual, virtual to real (AR) and, as I said at the end, virtual to real to real, which is virtual objects becoming physical again (3D printing etc.). It is about the connecting of people, a wetware grid, but threaded by narrative and context. The Meta presentation often showed these as quick interactions, throwing a basketball, looking at a design etc., but there is of course a longer thread, a digital thread if you like. It's all very fractal in its nature; the patterns repeat at any scale.
One thing to note, though, is that everything feeds back into the entire loop; the arrows are multi-directional. You might 3D print a "device" or prop you found in a virtual world interaction, but that itself might be an input device for another experience. Imagine if you 3D printed a modelling clay tool you bought in a virtual world, then made a clay sculpture that you took a photo of for Flickr, or even scanned the sculpture in and made it a virtual object, a one-off that you sold on an NFT marketplace, which then saw it used as a prop in a virtual world movie, and so on. I used to talk about printing off a penny whistle and joining in with a virtual band at an Irish pub in Second Life. I hope you get the gist.
One key thing though, and it is slightly annoying that Facebook have nicked the word, but as you see below the entire loop is the meta loop (c 2009).
Describing the Meta loop in 2009 – It's not all avatars and worlds/islands
I know many of us old guard, or original gangsters, however OG you want to be, might be annoyed at the mega corp going Meta on us and riding high on our hard work persuading the world this is a good idea and the way we are all going. However, it is the way we are going, so we get to say "I told you so", rather than the multi-billionaires. Money clearly isn't everything 🙂 We, and I include all my fellow Eightbar people and the surrounding community from back in 2006, were already riding on top of some double-OG work. It's the nature of waves of societal acceptance and tech development. The hope is that businesses and experiences that are born of the Metaverse will thrive and bring entertainment, education, excitement and benefit to the world. By Metaverse businesses I mean not the ones building the underlying tech platforms, but the ones thriving on top of them, which then go on to influence the wave after this, just as Facebook and co are today. It's certainly not going away, so my son's birth certificate from 14 years ago saying father's occupation "Metaverse Evangelist" might turn out to be of use to him someday.
I mentioned above that the physical world is an important part of the metaverse, and in the Facebook/Meta pitch there was an instrumented house where the mugs and teapot that were carried in real life updated their position in the digital model. Well, that the other way around is the premise of my Reconfigure and Cont3xt novels. They are very metaverse, IoT, quantum, AR and VR related, but our hero Roisin gets to manipulate the physical world by altering the digital world. It works for an adventure, but it also keeps the "sci-fi set this afternoon" concept alive, and it is not yet all done and dusted 🙂 Go take a look; it's very reasonably priced and might spark a few ideas on the back of the future that Meta are pushing. More here, or see the ads on the right.
The recording of my BCS Animation and Games pitch is now live on the site and on YouTube. Unfortunately only the live audience got to see the larger avatar I was using to deliver the pitch, as the Zoom presentation feed is only of the shared screen, with a tiny top-left insert of the avatar and my dulcet and somewhat stuttering tones. It took a few minutes to get revved up, and to many I might be stating the obvious, but it is surprising how many people don't get the importance of user-generated content and/or expression in virtual worlds and games.
The official site is here, but I have added the YouTube version below to save a click or two.
As you may have noticed, I have been in and around virtual worlds and the metaverse for a long while. I have spent decades helping people understand the basics of virtual worlds and the benefits they bring to us all. The pandemic has highlighted the need to do some things differently, so it is no surprise that Metaverse is trending in all sorts of places once again. You don't have to look far to find replays of the ebullient money man from US TV, Jim Cramer, "describing" the Metaverse in such odd mixed-up sets of keywords and company names that it had me crying with laughter and with sadness.
Just to follow the stream of blurb he went through: I would only need 10 minutes to explain it all to him, as would any of us. Or, even better, he could just log on to a virtual world, any one of them, and talk to someone, experience something. I was reminded of the conversations about the internet, the web, ecommerce, social media and so many other emerging technologies. We always get to this stage of apparently influential people getting it so wrong yet bringing so much attention to something.
To be fair, we all have to learn things, but maybe we don't all then talk about them to a large TV audience in the context of financial investments until we do understand the words. I would be one of many happy to explain what the vision of the metaverse is, what the practical implementation of that is today, and why it is some of the words he used but not in the same order or context. Also… it's not all about NFTs either!
He said (slightly paraphrased):
“It’s Unity with an oculus looking at a t-shirt its Nvidia?”
“It’s a hologram”
“You go into a room to listen to Mozart but listen to Beethoven and these people don’t exist”
I hope my virtual world descriptions on TV (of which there are a few), and possibly the relationship between virtual worlds that form a proper metaverse, are not quite on this level. See my TV showreel page with the OpenSim conversation, added here from 2011 for completeness. Yes, a decade ago, on Saturday morning kids' TV, a live virtual world. C'mon people, let's do this thing!
Virtual worlds in 2011
Meanwhile I am off to google something up on the inter webs.
A good few years ago I started to experiment with body tracking and how it could be used to add another element to my martial art training in Choi Kwang Do. The aim being that if you capture 3D data points of a movement, you can recreate that move digitally from any angle and at any speed. Video may be simpler and more immediate, but you need multiple cameras, matrix bullet-time style, to see moves from every angle. I used the old Microsoft Kinect controller, then the newer version 2 (2012 and 2014, even longer ago than I thought 🙂 ). A digital capture with human biomechanical context applied to it could also be scaled up or down to compare different body types and sizes. A digital mirror could help overlay the optimum moves over the live performed moves, and so on. The possibilities are quite vast. What was not quite working, though, was dealing with the speed of movement using the camera technology of the time. Also, the Kinect skeleton rig lacked some points, such as shoulders, and could get confused by body rotation too. It was a good bit of kit but was more for front-facing use. Two Kinects would solve some of that, but it started to get complex.
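To give a flavour of the capture-and-replay idea, here is a minimal sketch in Python with NumPy. It is not the Kinect SDK; the joint list, shapes and helper names are my own assumptions. Each frame is a timestamped set of 3D joint positions, replay interpolates between frames at any speed, and a uniform scale maps a move onto a different body size:

```python
import numpy as np

# Hypothetical joint list; the real Kinect v2 rig tracks 25 joints.
JOINTS = ["head", "shoulder_l", "elbow_l", "wrist_l", "hip", "knee_l", "ankle_l"]

class MoveRecording:
    """Timestamped 3D joint positions for one martial art move."""
    def __init__(self):
        self.times = []    # seconds since the start of the move
        self.frames = []   # one (len(JOINTS), 3) array of x, y, z metres per frame

    def add_frame(self, t, joints):
        self.times.append(t)
        self.frames.append(np.asarray(joints, dtype=float))

    def sample(self, t, speed=1.0, scale=1.0):
        """Interpolated pose at playback time t, replayed at `speed`, scaled by `scale`."""
        src_t = np.clip(t * speed, self.times[0], self.times[-1])
        i = np.searchsorted(self.times, src_t)
        if i == 0:
            return self.frames[0] * scale
        t0, t1 = self.times[i - 1], self.times[i]
        a = (src_t - t0) / (t1 - t0)  # blend factor between the two frames
        return ((1 - a) * self.frames[i - 1] + a * self.frames[i]) * scale

# A two-frame "punch": the left wrist moves 0.6 m forward over half a second.
rec = MoveRecording()
start = np.zeros((len(JOINTS), 3))
end = start.copy()
end[JOINTS.index("wrist_l"), 2] = 0.6
rec.add_frame(0.0, start)
rec.add_frame(0.5, end)

# Half-speed replay, scaled up 10% for a taller student: the wrist is mid-punch.
print(rec.sample(0.5, speed=0.5, scale=1.1)[JOINTS.index("wrist_l")])
```

The same sampled poses could be rendered from any camera angle, which is exactly what the single fixed video camera cannot do.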
The interesting news that just appeared from another martial art is in this video from World Taekwondo, using body sensors attached to students to control virtual characters for distanced sparring.
Using sensors from the startup Refract, the federation has officially created this technology, showing a great deal of technical know-how. It looks as if the sensors are measuring the knee and elbow joints for position and relative position to drive the avatar. I am not sure how precise it is as a teaching aid, but I would love to try it out and see how it works with my martial art.
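For a feel of what such sensor rigs need to derive, here is a small sketch, again in Python: given three tracked points, the joint angle at the middle one is just the angle between the two limb segments. This is basic vector maths of my own devising, not Refract's actual method:

```python
import numpy as np

def joint_angle_deg(upper, joint, lower):
    """Angle at `joint` between the segments joint->upper and joint->lower."""
    a = np.asarray(upper, dtype=float) - np.asarray(joint, dtype=float)
    b = np.asarray(lower, dtype=float) - np.asarray(joint, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

# A straight leg reads ~180 degrees; chambering a kick closes the knee angle.
print(joint_angle_deg([0, 1, 0], [0, 0.5, 0], [0, 0, 0]))      # ~180.0
print(joint_angle_deg([0, 1, 0], [0, 0.5, 0], [0.5, 0.5, 0]))  # 90.0
```

Streaming a handful of angles like these per joint would be enough to pose an avatar's limbs, even if it is not precise enough to critique fine technique.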