Learning to draw

I have been trying to fit in one creative activity each day, alternating between them: sometimes it’s guitar, sometimes it’s drawing, sometimes it’s both. I had started trying to learn to draw a few years ago, and now that I have the time and inclination again I took out a subscription to 21draw.com. This was because Mark Kistler, whose excellent “You Can Draw in 30 Days” gave me some instant enjoyment and techniques when I tried last time, has several series on 21 Draw. I had started back on that, and will go back to it, but I got very interested in @Rodgon and his learn-to-draw-anatomy-in-21-days class.

This class brought me an instant revelation, which seems obvious now, but of course these things aren’t obvious when you don’t know them. I had been trying to doodle Tai Chi moves as a way of contemplating and understanding them. I was looking at them as “photos” in my head and attempting the outline view, a draw-what-you-see approach. This course was about learning to draw people from the inside out, considering the connecting points and rotation of joints and the stretch of muscle groups. Tai Chi is more about the inside than the outside, and when I use 3D models in game engines it is all about kinematics and bones etc. Hence it should have been obvious that I should try to draw this way, not trace an outline, a silhouette of someone, and then “colour it in”. Boom! A ping of enlightenment and a course to help. It is one of many, but you have to start somewhere.

Not only is this an enlightening idea for me, but I have also realized that just as I have spent many years learning martial arts approaches, I should treat these other life skills in the same way, just a step at a time. Every bit of basic practice is as important as any of the fancy stuff. There are no easy fixes or paths to anything, but there are ways to find things that help and accelerate the right sort of learning. Learning things should be fun, even if difficult and challenging. It is not a case of no pain, no gain. Anyway… on with the learning.

Basics

The course starts with very basic shapes: connecting circles into cylinders, making circles become spheres with space and volume to consider. Connecting points on either side of a sphere becomes the basis of Rodgon’s approach to all body parts, but it starts with heads 🙂

Head and neck

Before you know it you are exploring head and neck positions and shapes, and it’s feeling rather good.

Poses

Body poses and more complex shapes soon start to appear.

It’s all still sketches and scrappy lines, but learning things like his mantra of outside edge first, then inside for volume, when drawing arms and legs, combined with overlapping shapes, is really fascinating. I am looking forward to using this and the other drawing techniques I’m learning from a how-to-draw-manga book, plus the shading and styles of Mark Kistler, to do some things that are going to help me learn more about Tai Chi and also storyboard my books. There are plenty more 21 Draw courses to do too.

I am playing with both pencil and paper and Procreate on the iPad.

It is odd that with all the AI generation capability I have, I am feeling the need to keep it human and learn the craft, the art. I may just stick to these basics, but each time I doodle something there is a learning moment. Practice makes perfect 🙂

The Flickr album of my scrawls from this anatomy course is here

https://www.flickr.com/gp/epredator/3Tk65bdP53

I have retired – wahoooo

Today marks the first day of my official retirement!!!

I have completed the main mission of the Role Playing Game (RPG) called work, having maxed my levels and dealt with quite a few side missions that came my way. My work as a software engineer (programmer, as we used to say), IT Architect, Inventor, Technology Blogger, Metaverse Evangelist and virtual world builder, and finally Industry Analyst for industrial IoT formed the main story arc. Of course, TV presenter, sci-fi author and Dr of Technology (Hons) are all part of the fun too.

I have worked with some wonderful people over the years too and learned a lot from everyone, so more than anything thank you all!

Of course huge love and thanks also go to my very supportive family, especially my wife Jan, who will now have to put up with my new relaxed state, but will keep me busy I am sure.

Time now for more technology and creative pursuits, with an extra helping of martial arts, as my character sheet still has some numbers to ramp up. I am aiming to write book 3 in the Reconfigure series next year too. Obviously, I will happily take a look at any advisory roles that pop up, as I have spent this 35+ year career helping others and will continue to do so.

A lot fewer people will now see the ever-growing collection of items on the backdrop of my office on video calls (hence the video), an ice breaker and memory aid in many conversations, but it’s not going anywhere either.

Once again, thank you everyone who has made this part of the journey of life such a blast. I hope you all get through your main mission with as much fun as I have had!

I will continue (or rather restart) posting here on Feeding Edge and all the other places I may have been a bit quieter on recently. There are so many interesting things to observe and engage with.

Generating virtual worlds with AI

The past few weeks have seen a wave of virtual world generation AI models appear. It is a natural next step from images and videos. Google Genie 3 has been released to some early testers, with lots of videos of it working doing the rounds. Hunyuan World 1.0 is an open source world generator; I am still trying to get a development environment to work properly to try it. As with all dev tech, if you don’t keep the environments updated then things just get out of hand and stop working. Wheels within wheels of tools and dependencies, and it always has to be just the right version, not too old and certainly not too new! It was whilst battling my own machine, and the virtual machines within it, that Mirage 2 appeared on the web (Mirage: AI UGC game engine), so I gave that a go.
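As an aside, one small way to sanity-check that sort of environment before running anything is to compare installed package versions against the pins a project expects. A minimal sketch follows; the package names and version numbers are hypothetical examples, not the actual Hunyuan World 1.0 requirements.

```python
# Check installed package versions against a project's pins before running it.
# The packages and version pins below are hypothetical examples.
from importlib.metadata import version, PackageNotFoundError

pins = {"torch": "2.1.2", "numpy": "1.26.4", "diffusers": "0.27.0"}

for pkg, wanted in pins.items():
    try:
        got = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed (need {wanted})")
        continue
    status = "OK" if got == wanted else f"MISMATCH (need {wanted})"
    print(f"{pkg}: {got} {status}")
```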

This allows you to provide a picture, then create (or AI generate) a description of the world and the character. It then gives you a few seconds of walking around in that world as it generates it on the fly.

The first thing I tried was one of the preset prompts and images, a cyberpunk world. Rather than walk forward, I turned around and headed into the unknown areas behind. It worked. Obviously it is early days for this, but when I think back a few years to Midjourney’s starting images, as I started with them in 2022, it has the same vibe and the same opening up of potential new forms of experience.

Next I used a photo of my wife in the garden, taken as a sort of video game view. I had not given it much detail about her, so it goes a little off track, but the principle of being able to walk to the end of our garden using game controls was exciting.

Bear in mind this was a quick AI prompt based on the photo, then just a few seconds waiting in a server queue before being live in the world, walking around.

I am very much looking forward to experimenting some more as these tools improve just as all the others have. Maybe Reconfigure’s movie might be a bit more interactive, or some hybrid media yet to be created 🙂 Hello metaverse.

Celebrating 35 years of full-time tech (though really about people) work

generated old Father Time

Wednesday 13th August 2025 marked 35 years since I started full-time work in the tech industry after graduating. I posted the following thought on LinkedIn and Facebook and it seemed to resonate with people, so whilst the web is still here and not swamped with AI agents doing deals, I thought I would share it a little more permanently.

I started at IBM, where I had already spent a year working as part of my degree. Given I also had my first program published when I was 14, in CVG magazine (a lunar lander game on the C64), I have had a long old career on top of the 35 “official” years, mostly in future and emerging technology. (The image is of a cyberpunk old Father Time, made using the new Adobe Firefly app on the phone.)

I still find why something exists, how it works and where it is going hugely interesting, with things that are experiential being the biggest draw. What does it feel like to communicate in a virtual world? What does it feel like to drive an electric car? How does a story or narrative get to us? How do you think about creating? Whilst the rise of AI takes care (supposedly) of all the drudge in work, our experiences and the things that make us feel will become more prominent. A lot of business focus becomes numbers, how much cash, how many shares, but it would seem that we, as thinking, feeling people, are the next emerging technology. Or maybe we always have been. 🙂 Anyway, here’s to the past 35+ years 🙂

GenAI video gets even better with Midjourney

Two years ago I generated the first scene of my first novel in Midjourney as a static image, then got the Runway app to bring Roisin to life; this was the result. The post about it is here, and a later one on keeping a consistent character in the still image generation is here.

It was good, and somewhat amazing, but this week Midjourney added a powerful new video generation model, and the same image gave this even more stunning result.

The new one has much more consistency and intricate detail; down in the bottom right a silver zip toggle sways as she types. The music was not AI, just added on a YouTube Short. Midjourney has the stated goal of getting to generate worlds, i.e. the metaverse. This seems to be at least as good as Google’s Veo 3, which we just got here. It is much harder to spot the progress in the LLMs, but this is a good visual indication of the speed and improvement.

The flying penguins get a look in too. It has been tricky to get penguins to fly so far, but Google Veo 3 did this one, the desert island with a soundtrack generated too. It is hard to see that they are penguins, but they are 🙂

Whilst that looked really good, the ultimate one came from Midjourney video again. Look at this 🙂 (Sound added from the YouTube Shorts editor.)

Another metaverse explainer layer

Just before the holidays I posted an experimental video built with genAI tools to explain the evolution of the metaverse, which may not always be obvious, as people often wait for a big bang of a product. Instead we see the ever-increasing digital transition of our real-time interactions, just as we have done with maps: to GPS location, then to full GPS with traffic and route finding, which is itself a digital twin of the world that many of us interact with daily in our cars and on our phones. That video is in this post.

Following on from that is this one, which shows further things that can be done with the very same assets that made a sequential video, but now split into a presentation layout using virtual space, plus a little bit about how dynamic virtual worlds can be. The primary message is that a GenAI video in a virtual world is not that far from being an entire virtual world to explore itself. We explore 3D data all the time ourselves, in games and in mapping too. Sometimes we do need to be spoon-fed content, as in a video or a PowerPoint deck, but other times it is better to look and experience at your own pace. This virtual world concept for presentations is not a new one. Back in 2009 I wrote about trying a different presentation style, laying out panels in Second Life, and in 2010 I also used it for rehearsing the first of my many TV slots, this one on 3D printers. What has become easier now is the creation of the content in the first place. Well, I say easier; it is different, still with lots of trial and error, and you need a bit of a vision for what you are trying to do.

I used my spatial.io account for this, but to extend the tech experiment I also used a custom deployment through Unity, to see how it all worked from that point of view too. So this is a mix of native spatial.io tools and their base world items, and some extras that I pumped in through Unity. (They have recently changed their licensing and access, so I am not posting the Spatial link as it may need some work 🙂)

Reusing metaverse content in the metaverse to describe the metaverse….

Talking with an AI of Roisin from my novels

Achievement unlocked. I just ran a local-only version of a #genAI LLM and gave it the text of my two sci-fi novels, Reconfigure and Cont3xt. Amongst other things I have had a conversation with my lead character Roisin! Separately (as in the photo below), I also asked if the books help solve whether we live in a simulation. A great muse to chat with about the potential of the third book. reconfigurebook.co.uk It is not perfect, but I can let it know what it has not got quite right, and it also helps me remember the intense process of writing them in 2015, as they flowed onto the page like binge-watching a boxed set.

I used GPT4All from https://www.nomic.ai/gpt4all and just added one of the Llama models to it. Giving it a fresh directory with a copy of the book PDFs was enough to get going. This is an MBP with an M2 chip, but there was no delay in having a conversation or diving right into the text or the personality of Roisin. Words are obviously much quicker to process than generating images or video.
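For anyone wanting to script the same sort of thing rather than use the desktop app, here is a minimal sketch using the gpt4all Python bindings and pypdf. The file names are hypothetical, and it pastes a book excerpt straight into the system prompt rather than using the app’s LocalDocs indexing, so it assumes the excerpt fits in the model’s context window.

```python
# Minimal sketch: chat with a character from a book PDF using a local Llama
# model via the gpt4all Python bindings. Runs fully offline once the model
# file has been downloaded. File names below are hypothetical examples.
from gpt4all import GPT4All
from pypdf import PdfReader

# Pull the raw text out of the book PDF.
reader = PdfReader("Reconfigure.pdf")
book_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Load a local Llama model (downloaded on first use).
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# Paste an excerpt into the system prompt; unlike LocalDocs there is no
# retrieval step here, so keep it within the model's context window.
system_prompt = (
    "You are Roisin, the lead character in the novel below. "
    "Stay in character and answer from her point of view.\n\n"
    + book_text[:8000]
)

with model.chat_session(system_prompt=system_prompt):
    reply = model.generate(
        "Do the books settle whether we live in a simulation?",
        max_tokens=512,
    )
    print(reply)
```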

AI Roisin picked up on some of her mannerisms in the books and played heavily on the various situations she has encountered. A lot of the book is about her inner voice and intentions, so the genAI had a lot to go on.

The wider world of the books, the tech and the philosophical elements of the story were things it was not always getting quite right. Things changed a bit when it said it was trying not to generate spoilers and I pointed out that I wrote it, so the LLM changed tone and intention a little. It was the usual thing: ask it to describe something and it tells you x and y but not z; you mention z and it’s “sorry, my mistake, yes you are right”. However, for a scoot through the lore, the background and some of the other characters, this is all good. People may have read the book and got a different feel for something, so it is good not to treat it too rigidly.

My favourite part was when Roisin switched to whispering some extra details about something, an out-loud statement followed by a psst… listen type of moment. I have only ever had a few tech-powered moments of that impact.

As with my previous renders of Roisin as an image and a video, I am looking forward to the ongoing evolution of this, so I can hang out in a metaverse version of my created world and characters.

A short video about Metaverse

I decided to create a very short metaverse evolution explainer video, but using GenAI. It is based on looking at patterns: we have all got used to moving from paper maps to digital ones, with GPS, and now fully instrumented digital twins with traffic and other information. That leap applies to many other use cases. It’s all metaverse – digital to physical and back again; people, machines and data. All on my own midjourney.com, runway.ml and luma.ai accounts. I also learned a lot more about how hard it can be to wrangle AI into what you really want, but it works 🙂

Metaverse evolution

What was mad about this was that I generated my key images in Midjourney and had a couple of goes in Runway that I was happy with (the splicing together and the talking soundtrack are also Runway), but a day after I had it where I wanted it, Luma.ai went live and I gave that a go. For a few of the scenes it was just much more what I needed. There is a point where you just have to hit publish, but these things keep improving as base tools, let alone the skills improving to ask for the right thing. It is very much a creative process, even if the mantra is often that AI is taking over.

A presentation about everything

I recently gave a BCS presentation online for the Animation and Games specialist group, and anyone else who wanted to come along, where I took some of the individual subjects that I have been engaged with in emerging technology and tried to describe them all in context with one another. It was a bit of a mad thing to try to do, but I also looked at how we might all be able to understand some of the technology advances by cutting through the jargon. It got moderately philosophical, with both fractal thinking and the yin/yang concepts that my brain is pondering more due to learning Tai Chi. Quite a combination!

The presentation is now available on YouTube if you want to take a dive into “IoT, 5G, AI/GenAI, Cloud streaming, Edge computing, metaverse, Spatial computing, AR, VR, XR, quantum computing, industry 4.0/5.0, Brain computer interfaces (BCI), CRISPR, open source, crypto, Web 3.0 (the list goes on)”.

The full BCS page with all the blurb and a download of the deck (minus the video element) is here

Warning: it also has adverts for my books Reconfigure and Cont3xt, which also relate to all these concepts 🙂