Consistent characters in Midjourney GenAI

It’s been a while since I posted anything, for lots of reasons; that’s an offline conversation. However, this weekend something appeared on my feeds that was just too exciting not to do a quick test with and then share, which I already have on Twitter, LinkedIn, Mastodon and Facebook, plus a bit of Instagram. So, many channels to look at!

Back in August 2022 I dived into using Midjourney’s then new GenAI for generating images from text. It was a magical moment in tech for me, of which there have been few over the 50+ years of being a tech geek (34 of those professionally). The constant updates to the GenAI and the potential for creating things digitally, in 2D images, movies and eventually 3D metaverse content, have been exciting and interesting, but there were a few gaps in the initial creation and control, one of which just got filled.

Midjourney released its consistent character reference approach, which lets you point a text prompt at a reference image, in particular of a person, and then use that person as a base for what the prompt generates. Normally you ask it to generate an image and try to describe the person in text, or by naming a well-known person, but accurately describing someone starts to make for very long prompts. Any gamers who have used avatar builders with millions of possible combinations will appreciate that a simple sentence is not going to get these GenAIs to produce the same person in two goes. This matters if you are trying to tell a narrative, such as, oh I don’t know… a sci-fi novel like Reconfigure? I had previously written about the fun of trying to generate a version of Roisin from the book in one scene and passing that to Runway.ml, where she came alive. That was just one scene, and trying any others would not have given me the same representation of Roisin in situ.

[Image: Initial render in Midjourney some months ago]

The image above was the one I used to push to Runway.ml to see what would happen, and I was very surprised how well it all worked. However, on Friday I pointed the --cref tag in Midjourney at this image and asked for some very basic prompts related to the book. This is what I got:

[Image: Another hacking version, this time in a computer room not a forest]
[Image: A more active shot involving a car]
[Image: A different style, running at the car, but similar to the previous one]
[Image: Looking to buy some Marmite (a key attribute in the story and life of Roisin)]
[Image: Another Marmite shopping version]
[Image: A more illustrative style in the snow (sleeves are down)]
[Image: Another snow stylistic approach (notice sleeve up and tattoos)]
[Image: A more videogame/cel-shaded look to the computer use in the bunker]

As you can see, it is very close to being the same person, in the same clothing, across different styles, and these were all very short prompts. With more care, attention and curation these could be made even closer to one another. Obviously a bit of uncanny valley may kick in, but as a way to get a storyboard, sources for a GenAI movie, or a graphic novel, this is a great innovation to assist my own creative process, in ways that would otherwise need a huge budget from a Netflix series or a larger publisher. When I wrote the books I saw the scenes and pictures vividly in my head; these are not exactly the same, but they will be in the spirit of those.
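
For anyone wanting to try this, the prompts really were that short. A minimal sketch of the shape (the reference URL here is a placeholder, not my actual image, and --cw is the optional character weight parameter):

/imagine prompt: a girl running from a car --cref https://example.com/roisin-reference.png --cw 100

As I understand it, higher --cw values carry over hair and clothing from the reference as well as the face, while lower values track the face only.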

Reconfigure – The Movie, nearly

Having written my books Reconfigure and Cont3xt, I am always using them and the story I created as a way to explore other mediums. I started drawing a few years ago to see if I could maybe go manga with it; that included buying some action figure mini drawing mannequins that have some tech and weapon props. I created some MetaHumans and have a Unity-based mocap suit from a Kickstarter to see if I could produce anime. Audio voice generation has been another thing I tried. Each of these projects would take a significant amount of time and effort, and probably cost too, but delving in as a mini proof of concept gives me an appreciation of what would be needed.

Midjourney and the AI image generators have offered another option to explore, and very quickly too. One of the challenges is to get the same character to appear in each generated image (i.e. a storyboard). I decided to sort that out later in this POC, though I did suggest that the actress to play Roisin should be Ruby Rose, to get a potentially consistent likeness.

[Image: Roisin at the start of Reconfigure in a bit of a last-minute situation, getting off the grid]

Having got this image, which was pretty good for a first attempt with very little text in the prompt, I popped over to Runway.ml and tried its new Gen-2 image-to-video generation, this time with NO PROMPT at all! It decided she should look up and stare into the distance, which in the book she is doing as she is being chased, very aware of her surroundings as she tries to outwit those after the amazing tech she has discovered. They only generate short clips on the free accounts, but those few seconds I found quite astounding and bewitching.

GenAI is obviously on a major drive into all sorts of areas, but the ability to use tools like this to create a film version of my books in my own time and on my own budget is emerging as a strong contender, unless Netflix or Prime decide they would like to do it instead 🙂

Pixel clothing – it’s back (again)

I saw some links the other day to a new clothing and bag range at Loewe and thought, hang on, I have seen that somewhere before 🙂 I am not posting a picture because it would be their copyright image, so just go have a look instead.

This range is physical, but made to look pixelated and game-like, with chunky, stepped borders and gradient colouring. It makes something real look a bit like it’s out of Minecraft. There have been a few things like this before, such as shades to match the Thug Life internet meme, but this is proper expensive clothing, Zoolander territory.

The thing was that in 2016, as I have tweeted and popped onto LinkedIn, I wrote a little piece of detail about Roisin in Reconfigure to enhance her gamer geek chic status, which said the following:

“She slipped on her red Converse shoes and her grey SuperDry hoody. The phone just squeezed into her jeans front pocket. She picked up her odd cartoon looking bag, full of all the essentials she might need, lip gloss, money, cards, keys, moleskin notebook, hand wipes. The handbag was a regular size but looked slightly larger due to the cartoon cel shaded look that it had. The front panel looked like a 2d drawing of a 3d object in a video game. Thick black lines added around the primary coloured sections.”

Now, I can’t claim to have invented this fashion range because, as a writer with a story set in the current day, I drew upon lots of things I had already seen or experienced, i.e. write what you know. However, I am really happy that Roisin’s look from 2016 is now popping up again in 2023. In the same way, my first ever public presentation when I started as an industry analyst with 451 Research back in 2016 showed the direction towards an autonomous enterprise, such as we are seeing some people create with AutoGPT/ChatGPT and the like right now.

Don’t even get me started on the metaverse 🙂

Why an IoT book?

The Internet of Things (IoT) is an increasingly popular tag to place on almost everything out in the industry at the moment. It has far surpassed any discussion of Web 2.0 or Internet 3.0. In part it works because it does not have a version number. We all know about ‘The Internet’ as a support structure for the world now. The increase in mobile and app usage has helped separate it from ‘The Web’ for many people. In briefings we used to have to explain the Internet as the connection of all the devices, with the Web as an information layer on top. The Web is not the Internet. I think the average person in the street, bar, front room, office etc. would probably think of the Internet as getting a Wi-Fi signal or a 3G/4G set of bars on their smartphone.
It is this social awareness of connectivity that lets the term IoT resonate with people. We all have devices that are connected to our Wi-Fi or have some sort of SIM card in them, that do not need us to be present for them to operate and communicate. Xbox and PS4 patch themselves over the air; Dropbox and iCloud etc. sync their files all over the place so we always have what we need. These are all simple everyday items already connected to the net and doing their thing, so they are obviously part of the Internet of Things, in common parlance.
That common understanding of connectivity and remote action runs into everyday life, and therefore into the everyday life of people in business, CEOs, CIOs and the like, combined with new product development and consultants helping shape all of industry.
So, yes, it’s another buzzword to hang everything on, but also, yes, it’s really very important to us all. In a recent talk to Predlet 1.0’s Year 8 secondary school class I explained how important IoT as an idea is to their future. Mainly this is because, aside from a technical career path building things, knowing and appreciating the connectedness of things may generate new industries and businesses that they will work in, or hopefully create.
This pitch was on top of the ‘normal’ discussions of virtual worlds, augmented reality, game technology, 3D printing, brain control devices, and open source attitudes. All those are my bread and butter, but they are also part of the general IoT umbrella of concepts and ideas.
One branch of IoT is that of sensors: instrumenting the world to gain a better insight into what is happening. We have had networked sensors for a while, but they are getting cheaper and more detailed in what they can deal with. This branch smashes into virtual worlds, at least it did where I started with it more publicly in 2006. The world of tennis was being instrumented by Hawk-Eye. The physical position of the ball was/is captured via cameras and converted to x,y,z positional data. I used that data to re-visualize the ball tracking in Second Life, in a virtual world. That amounted to a visual representation of an Internet of Things style sensor reading of the physical world. That is of course a one-way interaction, but it allows many people to immerse themselves in the data at the same time, online, in a shared space.
Tracking those people, to help understand what they were watching and doing, generated a heat map of activity. That is possible because it is easy in a virtual world to instrument everything. Avatars and objects are being rendered in a space that has to know where they are in x,y,z space, hence it can be collected and reported on. That makes virtual worlds an ideal place to try out large-scale instrumentation. What if… there were sensors on this, and there were 1,000 of them dotted around a town, how about 10,000? Oh look, something is moving, follow it, etc.
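As a rough illustration of how simple that kind of instrumentation can be, here is a minimal Python sketch of bucketing avatar positions into a heat map grid. The coordinates, cell size and sample data are all made up for illustration; the real thing was done with Second Life scripting and logging rather than this code.

# Bucket avatar (x, y) position samples into grid cells to build a heat map.
from collections import Counter

CELL = 4  # metres per grid cell (made-up value)

def heat_map(positions):
    """Count avatar sightings per grid cell from (x, y) samples."""
    counts = Counter()
    for x, y in positions:
        counts[(int(x // CELL), int(y // CELL))] += 1
    return counts

samples = [(10.2, 33.1), (11.0, 33.9), (50.5, 12.0), (10.9, 32.2)]
for cell, hits in heat_map(samples).most_common():
    print(f"cell {cell}: {hits} sightings")

The hot cells are then simply the ones with the most sightings, which is the sort of view the activity heat map gave us.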
Whilst on virtual worlds, one of the first projects that arrived on Hursley island in Second Life in 2006 was an anglepoise lamp. To many people it seemed just like a fancy model, but the guys had wired the digital lamp to the external world. The lamp was connected by the now open source MQTT protocol. A real lamp in the real lab could be turned on and off with an MQTT message that the lamp subscribed to. The digital version subscribed to the same message, so it was in sync with the real world: the virtual bulb lit up when the message was sent. This was also built two-way though; if you hit the virtual switch it also sent the message, just as the previous web page had. We used it as an example of integration and of pub/sub messaging, but also to help people understand that these environments are not stand-alone, or they don’t have to be.
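To give a feel for the pub/sub pattern, here is a minimal sketch using the Python paho-mqtt client. The broker address and topic name are made up, and the original Hursley build predates this library, so treat it as illustrative rather than what we actually ran.

# Both the real lamp and the virtual one subscribe to the same topic;
# hitting either switch publishes the new state to that same topic.
import paho.mqtt.client as mqtt

TOPIC = "lab/lamp"  # made-up topic name

def on_message(client, userdata, msg):
    # Each lamp, physical or virtual, sets its bulb from the payload.
    print("lamp ->", msg.payload.decode())

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also wants a CallbackAPIVersion
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe(TOPIC)
client.publish(TOPIC, "on")  # this is the ‘switch’ being flicked
client.loop_forever()

Because both lamps subscribe to the same topic, neither needs to know the other exists, which was part of the point of the demo.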
[Image: Roisin’s view of the World mocked up during storyboarding the novel]
This detail, about instrumentation and altering the world or knowing what is going on, is why I had to label my sci-fi novel Reconfigure, and the follow-up Cont3xt, as IoT books on Amazon. Currently if you search for “IoT Books” Reconfigure appears in the top ten of books on the subject. I am not jumping on a bandwagon as such; in the book the world is instrumented and Roisin controls and understands it with a virtual/augmented interface. Whilst part of the extension into sci-fi is made up, the grounding is in a long history of having been around this stuff and the experience that brings.
BTW, don’t forget that a Kindle book does not need a Kindle to read it; Amazon has a set of apps and readers for most platforms. So if you want to see IoT in the future, check it out.

Just got to try it – Book time

We just spent the past two weeks in a villa in Mallorca on our family holiday. It was, as with all holidays, great fun and a change of scenery and pace, though I was still on the lookout for some interesting new projects and had a few calls to discuss things. It is a time to reflect though.
I spent a lot of time doing my Choi patterns out in the sun, as well as us all swimming, eating and playing games like Exploding Kittens and Forbidden Desert. We travelled around a bit in the hire car and visited some interesting places: caves, mountains and beaches. I managed to read an entire book too; I finally got around to Andy Weir’s The Martian. It was an odd dichotomy to read about the claustrophobia of being stranded on an unforgiving planet, and having to use maker and engineer wits to overcome the problems he faced. The technical detail in it was superb; the maths and calculations around how much water, oxygen, fuel, soil, potatoes etc. he needed kept whirring in my brain whilst in this baking heat and relaxed family atmosphere.
We were not cut off completely; we had WiFi in the villa, so the predlets were constantly streaming Minecraft and other vloggers. Of an evening I found CBS Action on the TV and got the predlets into watching Star Trek: The Original Series, followed by MacGyver, which they had never seen before.
Somewhere this all seemed to come together in a blinding flash of inspiration and I suddenly realized I had a sci-fi story in my head. One I needed to write.
I know, it’s something loads of people do, but I do love writing. I realized that my technical know-how spanning so many years had value in bringing some genuine flavour to any story. Also, I specialise in the near future with emerging technology; often in articles, posts and tweets I extrapolate a little further. I have played a lot of games, watched a lot of films, and like to riff on ideas. My book reading is of course not great, I am not an avid reader, but I think that means the things I have read have a bigger impact on me.
So I came back home, sat down and started. Having mused over the idea for the second week of the holiday, I thought I had best storyboard it first. After all, this is like trying to build software: you need to architect it first, based on the requirements. Those requirements I had, mostly: the base premise, some of the development and a potential end.
Yesterday was just day 1, and it may take years, but I got two chapters’ worth, 5,500 words in total, out. As I wrote, it flowed; it was like watching a boxed set on the TV. I didn’t totally know what was going to happen, and things cropped up and needed to be addressed. Again, this was very much like coding: you just do it, find the flow and ride it. I hope I can continue to find the flow. I didn’t have a specific character in mind, but yesterday they just arrived on the virtual paper I was typing into.
Of course there is no money in this, but whilst I wait and chase those customers, and even potentially a regular job, I might as well try something that I should have done a long time ago.
It seems that, despite my wide-ranging experience, both technical and in sharing with people and doing the right thing, I am not finding the right avenues to express those talents and get paid properly.
So my plan is to get it done and just self-publish on Amazon or some such place. If it works, great; if not, then it’s another experience. I also get to keep my tech skills up without writing too much software, as the book needs to use the right tools and work in the right way. So I figured this is another excuse to keep up to date with everything.
I started off with Feeding Edge thinking I would write the book about the virtual world experiences in corporate life, with some great positives to share but also some ridiculous negative human behaviour in big organisations too. It is the latter that stops me writing that factual book. I may do so after this fiction one, but I think the scars are still too fresh to want to open those up again.
So here we go, let’s see what happens with this little experiment. If a massive piece of traditional work appears requiring my full undivided attention, well, at least I have started 🙂
The working title of this is “Reconfigure”. I have not checked whether that is already used elsewhere, but I am not married to the title. It’s a real-world sci-fi thriller, full of true technical detail and science fact, but asking some challenging questions of the world.
Right, time to write 🙂