Flush Magazine 10 – Airbags and ice skating

I took inspiration for this month's article in Flush Magazine from @elemming's recent fall while ice skating, in which she broke her wrist. A few things came to mind around safety, and in particular the use of airbags in all sorts of places.
You can read it and see the really nice layout; once again a great job has been done making my words and ideas look awesome on the page. Thank you again @tweetthefashion for all the hard work getting this issue out there.
The direct link is here


As usual it's quite a mix, but it does share a common theme: Mars landings, motorbikes and ice 🙂
I hope you enjoy the article, and the rest of the excellent magazine.

Use Case 2 – real world data integration – CKD

As I am looking at a series of boiled-down use cases for virtual world and gaming technology, I thought I should return to the exploration of body instrumentation and the potential for feedback in learning a martial art such as Choi Kwang Do.
I have of course written about this potential before, but I have built a few little extra things into the example using a new Windows machine with a decent amount of power (HP Envy 17″) and the Kinect for Windows sensor with the Kinect SDK and Unity 3d package.
The package comes with a set of tools that let you generate a block man based on the joint positions. The controller piece of code also has some options for turning on the user map and skeleton lines.
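For anyone curious, the block man itself boils down to little more than a cube per tracked joint, moved every frame. The following is just a minimal sketch of that idea in Unity C#, not the actual controller script from the package; the jointPositions field here stands in for whatever the Kinect wrapper really exposes.

```csharp
using UnityEngine;

// Sketch of the "block man" idea: one cube per Kinect joint, repositioned every frame.
// Assumes some other component (e.g. the Kinect wrapper's controller) fills in
// jointPositions each frame; the field name is illustrative only.
public class BlockMan : MonoBehaviour
{
    public Vector3[] jointPositions = new Vector3[20]; // the original Kinect tracks 20 joints
    private Transform[] cubes;

    void Start()
    {
        cubes = new Transform[jointPositions.Length];
        for (int i = 0; i < cubes.Length; i++)
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.localScale = Vector3.one * 0.1f;
            cube.transform.parent = transform;
            cubes[i] = cube.transform;
        }
    }

    void Update()
    {
        // Move each cube to the latest joint position supplied by the sensor.
        for (int i = 0; i < cubes.Length; i++)
            cubes[i].position = jointPositions[i];
    }
}
```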
In this example I am also using Unity Pro, which allows me to position more than one camera and have each of them render to a texture on another surface.
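The multi-camera part is a standard Unity Pro trick: point an extra camera at the block man, give it a RenderTexture and put that texture on a quad acting as one of the screens. A rough sketch, assuming the extra camera and the screen surface have already been set up in the editor:

```csharp
using UnityEngine;

// Sketch of the Unity Pro multi-view trick: render a second camera into a
// texture and show that texture on a surface visible to the main camera.
public class TopDownView : MonoBehaviour
{
    public Camera topDownCamera;   // e.g. a camera above the block man, looking down
    public Renderer screenSurface; // the quad acting as the "screen" above him

    void Start()
    {
        RenderTexture rt = new RenderTexture(256, 256, 16);
        topDownCamera.targetTexture = rt;        // render-to-texture is Pro-only in Unity 4.x
        screenSurface.material.mainTexture = rt; // the quad now shows the top-down view
    }
}
```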
You will see the main block man appear centrally “in world”. The three screens above him show a side view of the same block man, a rear view and, interestingly, a top-down view.
In the bottom right is the “me” with lines drawn on. The Kinect does the job of cutting out the background. All of this was recorded live, running in Unity3d.
The registration of the block man and the joints isn’t quite accurate enough at the moment for precise Choi movements, but this is the old Kinect; the new Kinect 2.0 will no doubt be much better, as well as being able to register your heart rate.

The cut-out “me” is a useful feature, but it can only be projected onto the flat camera surface; it is not something that can be looked at from the left or right. The block man, though, is made of actual 3d objects in space. The cubes are coloured so that you can see joint rotation.
I think I will reduce the size of the joints and try to draw objects between them to give him a similar definition to the cut-out “me”.
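One way to do that would be to stretch a thin cube (a “limb”) between each pair of connected joints. A hedged sketch, assuming the two joint transforms are wired up in the inspector:

```csharp
using UnityEngine;

// Sketch: a "limb" stretched between two joints of the block man,
// to give him a bit more definition than isolated cubes.
// Attach this to a cube primitive and assign the two joint transforms.
public class Limb : MonoBehaviour
{
    public Transform jointA;      // e.g. the shoulder cube
    public Transform jointB;      // e.g. the elbow cube
    public float thickness = 0.05f;

    void LateUpdate()
    {
        Vector3 between = jointB.position - jointA.position;
        transform.position = jointA.position + between * 0.5f; // midpoint of the two joints
        transform.rotation = Quaternion.LookRotation(between); // point the cube along the limb
        transform.localScale = new Vector3(thickness, thickness, between.magnitude);
    }
}
```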
The point here, though, is that game technology and virtual world technology are able to give a different perspective on a real-world interaction. Seeing techniques from above may prove useful, and is not something that can easily be observed in class. If that applies to Choi Kwang Do, then it applies to all other forms of real-world data. Seeing it from another angle, exploring and rendering it in different ways, can yield insights.
It is also data that can be captured and replayed, transmitted and experienced at a distance by others. Capture, translate, enhance and share. It is something to think about: what different perspectives could you gain on the data you have access to?

A simple virtual world use case – learning by being there

With my metaverse evangelist hat on I have for many years, in presentations and conversations, tried to help people understand the value of using game-style technology in a virtual environment. The reasons have not changed, they have grown, but a basic use case is being able to experience something, to know where something is or how to get to it, before you actually have to. The following is not to show off any 3d modelling expertise; I am a programmer who can use most of the tool sets. I put this “place” together mainly to figure out Blender, to help the predlets build in things other than Minecraft. With the new Windows laptop complementing the MBP, I thought I would document this use case by example.
Part 1 – Verbal Directions
Imagine you have to find something, in this case a statue of a monkey’s head. It is in a nice apartment. The lounge area has a couple of sofas leading to a work of art in the next room. Take a right from there and a large number of columns lead to an anteroom containing the artefact.
What I have done there is describe a path to something. It is a reasonable description, and it is quite a simple navigation task.
Now let’s move from words, or a verbal description of placement, to a map view. This is the common one we have had for years: top down.
Part 2 – The Map
[Image: the top-down map]
A typical map; you start from the bottom left. It is pretty obvious where to go: two rooms up, turn right, keep going and you are there. This augments the verbal description, or it can work on its own. Simple and quite effective, but it filters a lot of the world out in the simplification, mainly because maps are easy to draw. It requires a cognitive leap to translate it to the actual place.
Part 3 – Photos
You may have often seen pictures of places to give you a feel for them. They work too. People can relate to the visuals, but it is a case of you get what you are given.
[Image: the entrance]
[Image: the lounge]
[Image: the columned corridor]
[Image: the goal]
Again, in a short example, this allows us to get quite a lot of place information into the description. “A picture paints a thousand words”. It is still passive.
A video of a walkthrough would of course be an extra step here; that is more pictures, one after the other. Again, though, it is directed. You have no choice in how to learn, how to take in the place.
Part 4 – The virtual
Models can now very easily be put into tools like Unity3d and published to the web so they can be walked around. If you click here, you should get a Unity3d page and, after a quick download (assuming you have the plugin 😉 if not, get it!), you will be placed at the entrance to the model, which is really a 3d sketch rather than a full-on, high-end photorealistic rendering. You may need to click to give it focus before walking around. It is not a shared, networked place, it is not really a metaverse, but it has become easier than ever to network such models and places if sharing is an important part of the use case (such as in the hospital incident simulator I have been working on).
The mouse will look around, and ‘w’ will walk you the way you are facing (‘s’ is backwards, ‘a’ and ‘d’ are side to side). Take a stroll in, down to the monkey and back.
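If you are wondering what drives that, it is only a small script on the player object. This is a minimal sketch using Unity’s built-in CharacterController and default input axes, not the exact controller in the published demo:

```csharp
using UnityEngine;

// Minimal first-person walker: mouse to look, WASD to move.
// Assumes the main camera is a child of this object and that the object
// also has a CharacterController component.
[RequireComponent(typeof(CharacterController))]
public class SimpleWalker : MonoBehaviour
{
    public float moveSpeed = 3f;
    public float lookSpeed = 2f;
    private CharacterController controller;
    private float pitch;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // Mouse look: yaw the body, pitch the child camera.
        transform.Rotate(0f, Input.GetAxis("Mouse X") * lookSpeed, 0f);
        pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * lookSpeed, -80f, 80f);
        Camera.main.transform.localEulerAngles = new Vector3(pitch, 0f, 0f);

        // WASD via the default Horizontal/Vertical axes, with simple gravity.
        Vector3 move = transform.forward * Input.GetAxis("Vertical")
                     + transform.right * Input.GetAxis("Horizontal");
        move = move * moveSpeed + Physics.gravity;
        controller.Move(move * Time.deltaTime);
    }
}
```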
I suggest that you now have a much better sense of the place: the size, the space, the odd lighting. The columns are close together, so you may have bumped into a few things. You may have lingered on the work of art. All of these tiny differences are putting this place into your memory. Of course, finding this monkey is not the most important task you will have today, but apply the principle to anything you have to remember, conceptual or physical. Choosing your own way through such a model or concept is simple but much more effective, isn’t it? You will remember it longer and maybe discover something else on the way. It is not directed by anyone: your speed, your choice. This allows self-reflection in the learning process, which reinforces understanding of the place.
Now imagine this model made properly, with nice textures and lighting, a photorealistic place, and pop on a VR headset like the Oculus Rift, which in this case is very simple with Unity3d. Your sense of being there is enhanced even further, and it only takes a few minutes.
It is an obvious technology, isn’t it? A virtual place to rehearse and explore.
Of course, you may have spotted that this virtual place, whilst in Unity3d to walk around, also provided the output for the map and for the photo navigation. Once you have a virtual place you can still do things the old way if that works for you. It’s a virtual virtuous circle!

Dear BBC, I am a programmer and a presenter – let me help

I was very pleased to see that Tony Hall, the new DG of the BBC, wants to get the nation coding. He plans to “bring coding into every home, business and school in the UK”. http://www.bbc.co.uk/news/technology-24446046
So I thought, as I am lacking a full-time agent in the TV world, I should throw my virtual hat into the ring and offer to work on the new programme that the BBC has planned for 2015.
It is not the first time I have offered assistance to such an endeavour, but this is the most public affirmation of it happening.
So why me? Well, I am a programmer and have been since the early days of such shows on TV, back in the ZX81/C64/BBC Model A and B/Spectrum days. I was initially self-taught, through listings in magazines and general tinkering, before studying to degree level and then pursuing what has been a very varied career, generally involving new tech at each step of the way.
I was lucky enough to get a TV break with Archie Productions and the ITV/CITV show The Cool Stuff Collective, well documented on this blog 😉 In that I had an emerging technology strand of my own. The producers and I worked together to craft the slot, but most of it was driven by the things I spend my time sharing with C-level executives and at conferences about the changing world and maker culture.
It was interesting getting the open source Arduino, along with some code, on screen in just a few short minutes. It became obvious there was a lot more that could be done to help people learn to code. Of course, these days we have many more ways to interact too. We do not have to stick to just watching what is on screen; that acts as a hub for the experience. Code, graphics, artwork, running programs and so on can all be shared across the web and social media. User participation, live and in sync with on-demand content, can be very influential. Collections of ever-improving assets can be made available, and then examples of how to combine them put on TV.
We can do so much with open source virtual worlds, powerful, accessible tools like Unity 3d and, of course, platforms like the Raspberry Pi. We also have a chance to explore the creativity and technical challenges of user-generated content in games, and next-gen equipment like the Oculus Rift. Extensions to the physical world with 3d printers, augmented reality and, increasingly, blended reality offer scope for innovation and invention by the next generation of technical experts and artists. Coding and programming are just the start.
I would love to help; it is such an important and worthy cause for public engagement.
Here is a showreel of some of the work.

There is more here and some writing and conference lists here
So if you happen to read this and need some help on the show get in touch. If you are an agent and want to help get this sort of thing going then get in touch. If you know someone who knows someone then pass this on.
This blog is my CV, though I do have a traditional few pages if anyone needs it.
Feeding Edge, taking a bite out of technology so you don’t have to.
Yours Ian Hughes/epredator

Talking heads – Mixamo, Unity3d and Star Wars

High-end games have increased people’s expectations of any experience they take part in that uses game technology. Unity3d lets any of us build a multitude of applications and environments, but it also exposes us to the breadth of skills needed to make interesting, engaging environments.
People, avatars and non-player characters are some of the hardest things to get right. The complexity of building and texturing a mesh model of a person is beyond most people. Once built, the mesh then has to have convincing bone articulation to allow it to move. That then leads to needing animations and programmatic control of those joints. If it is complicated enough with a body, it gets even trickier with the face. If a character is supposed to talk in an environment, up close, then the animations and structure required are even more complex. Not only that, but if you are using audio, the acting or reading has to be convincing and fit the character. So a single avatar needs design, engineering, voice-over and production skills all applied to it. Even for people willing to have a go at most of those trades and skills, that is a tall order.
So it is great that companies like Mixamo exist. They already have some very good free rigged, animatable people in the Unity Asset Store, which help us small operators get to some high-end graphic design quickly. They have just added to their portfolio of cool things, though, with Mixamo Face Plus.

They have a Unity plugin that can capture realtime face animation using video or a webcam. So now all techies have to do is figure out the acting skills and voice work in order to make these characters come alive. I say “all” 🙂 it is still a mammoth task, but some more tools in the toolbox can’t hurt.
They have created a really nice animated short film using Unity which shows the result of this technology, blended with all the other clever things you can do in Unity 3d. Mind you, take a look at the number of people in the credits 🙂

Even more high-end, though, is this concept video using realtime, high-quality rendered characters and live performance motion capture in the Star Wars universe.

The full story is here, with direct quotes from Lucasfilm at a Technology Strategy Board meeting at BAFTA in London. So maybe I will be able to make that action movie debut after all. There is, of course, a vector here to consider for the interaction of humans across distances, mediated by computer-generated environments (or virtual worlds, as we like to call them 😉).

Virtual World Jet Lag – Los Santos style

Last week Grand Theft Auto V was released. That, of course, is something everyone probably spotted. In amongst all the hype, the outrage and the $1 billion in takings there are some interesting nuggets that may have been missed.
GTA V is a giant virtual world simulator. Whilst GTA V Online is not yet live, so it is a single-player experience, it does have a very large and detailed environment. The attention to detail is almost so good as not to be impressive, because you tend not to notice it. The environment is very big: a sprawling city, a desert, a mountain range and lakes that you can explore on foot or in a variety of vehicles on land, sea and water (or even in a cable car or a blimp) act as a backdrop to a set of missions and story arcs. The place is bustling with activity, people and cars everywhere.

It is a pity that it could not have a mode or version of this world that removed the swearing and toned the violence down just a little. The reason I say this is that just exploring, just being in the place, would make a great experience in its own right. Heading off to the lake to watch the sunset, flying and landing planes on the runway, the games of darts, golf and tennis that are just sitting in there as side tasks: all of it makes this a fine example of game technology and design in action. As it is, it obviously deserves its 18 rating, which in context all makes sense. You can’t, though, drive down the road, listen to the radio or even play the iFruit iPhone companion app without a torrent of abusive language.
[Image: Chop in iFruit, the companion app]
As a gamer and an adult I find this all more than acceptable; however, as a parent I would be worried, because I know many people will end up letting much younger kids play this without fully appreciating what’s going on. Tone and intent are everything in a narrative, and the cartoon violence of stealing cars and having shootouts is pretty normal and, I would say, acceptable. However, there are parts of GTA V that enter other territory. A scene with the psychotic Trevor extracting information, which has to be played through, made me wince. I am not sure what is coming next, as that is only a quarter of the way through.
There is something that felt different, though. In GTA V you swap between three characters. This stops them seeming like they are your character and turns them more into toys or puppets that you are playing with. The third-person view helps reinforce that, as you can clearly see which character you are. Character swaps happen in other games such as Call of Duty, but in first person it becomes less obvious that you are not you. So playing GTA V, customising the characters with hair, clothes and tattoos, becomes more of a dress-up doll experience than a “hey, look at me” experience. I am not sure if this makes it easier to cope with the excesses of some of the characters, in that you are almost in god mode rather than sinking into the character. That may just be me, but it is definitely a different feel. It is more like looking into an aquarium than being in a swimming pool.
That said, it is a very absorbing experience, whether exploring or following the story. It has always been fun to just drive around in a car listening to the really good soundtracks playing on the various radio stations. The news reports that interrupt the music also match nicely with the activities you have been up to. Again, it is so well done you almost don’t notice.
The in-game web experience is also very neat. Facebook and Twitter are copied and mocked, but if you bother reading the details you will see, just as with the radio news, that there is a consistency to the in-game social media interactions. There is also a working stock market; investing in the right things before a mission that alters a share price is a clever way to get cash.
GTA V has also reached out to the actual web: Lifeinvader (the Facebook clone) exists for real. Before the game was released you could stalk (i.e. like) various business pages, such as the barber shops. That then linked in game to a discount or a freebie.
I already mentioned the iFruit app, but it too affects the game. If you train up the Tamagotchi-style dog Chop, his actions will alter in the game. If you buy custom car parts they will appear. This feature obviously had some trouble at launch; it was not scaled to cope with the day-one blitz, but it seems to be fine now.
I entitled this “virtual world jet lag” because I have certainly experienced a form of jet lag, or tiredness as you might like to call it. Late nights playing and exploring Los Santos have been needed in order to understand the game and its development. However, it does feel like I have visited somewhere, in the same way that exploring Second Life and the like does. The experience and memories are rich enough to make it feel like a journey has happened. It is large, free-roaming worlds that have this impact, more so than the frenetic, close-in shooters. Those leave you dazed and adrenalin-pumped; you remember the small experiences, taking a flag or an unusual frag. GTA V and the like provide so many memorable inputs, linked to discovery and learning, that they really are a place. It is something everyone should try and experience, to feel the difference between this and other styles of game. I hope this Los Santos model world can be used for some other experiences too, for those who are not gamers but who might change their minds about the art of game creation and the technical expertise of its creators.

3d Something from nothing

Modelling in 3d from scratch is hard. It is digital sculpture. Whilst the tools help, it really belongs in the hands of an artist, as much as a hammer, chisel and block of marble do, to create something from nothing. There are all sorts of wizards and helper programs, plus lots of existing artwork in the public domain, that let us less artistic, more techie people get some things done. I don’t think you can ever replace the talent and eye of an artist, but some technology is going to help as a starting point for 3d objects.
I have been looking at Kinect Fusion, which uses the Kinect to scan and then produce a 3d model of the things it is looking at. This looks like a very promising way to get existing objects into a good format for dropping into virtual worlds and Unity3d.
You will notice from this sample picture from Microsoft that the mesh created can be an OBJ, as used in normal 3d packages, or an STL, the format for 3d printers, so this is nearly a 3d photocopier.
[Image: Kinect Fusion sample from Microsoft showing OBJ and STL mesh export]
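Once you have a scanned mesh as an OBJ, pulling it into Unity at runtime is mostly a matter of parsing the vertex and face lines. Below is a minimal, illustrative loader, assuming a simple, already triangulated OBJ with no materials; the file path is just a placeholder.

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using UnityEngine;

// Minimal OBJ loader: builds a Unity Mesh from 'v' and 'f' lines only.
// Ignores normals, UVs and materials, and assumes triangulated faces.
public class SimpleObjLoader : MonoBehaviour
{
    public string objPath = "scan.obj"; // placeholder path to a scanned export

    void Start()
    {
        List<Vector3> vertices = new List<Vector3>();
        List<int> triangles = new List<int>();

        foreach (string line in File.ReadAllLines(objPath))
        {
            string[] parts = line.Split(new[] { ' ' }, System.StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length == 0) continue;

            if (parts[0] == "v" && parts.Length >= 4)
            {
                vertices.Add(new Vector3(
                    float.Parse(parts[1], CultureInfo.InvariantCulture),
                    float.Parse(parts[2], CultureInfo.InvariantCulture),
                    float.Parse(parts[3], CultureInfo.InvariantCulture)));
            }
            else if (parts[0] == "f" && parts.Length >= 4)
            {
                // "f 1/1/1 2/2/2 3/3/3" -> take the vertex index before the first '/'
                for (int i = 1; i <= 3; i++)
                {
                    int index = int.Parse(parts[i].Split('/')[0]);
                    triangles.Add(index - 1); // OBJ indices are 1-based
                }
            }
        }

        Mesh mesh = new Mesh();
        mesh.vertices = vertices.ToArray();
        mesh.triangles = triangles.ToArray();
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        gameObject.AddComponent<MeshRenderer>().material = new Material(Shader.Find("Diffuse"));
    }
}
```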
I say I have been looking at it, though I have not been able to use it fully yet. The sample is a Windows-only application. Whilst the Kinect works fine as a “normal” Kinect on the Mac, the scanning application isn’t supported (for obvious reasons). I run Windows under Boot Camp on the five-year-old MacBook Pro; unfortunately that does not have a full DX11 graphics card, and without all the bells and whistles of DX11 the application doesn’t work. So I may have to get a full Windows machine, as long as I can be sure it is fully DX11 compliant.
There is another interesting development in the works that @asanyfuleno pinged my way before I saw it explode across my Twitter feed. It is the ability to create a 3d object from three simple swipes across a 2d picture.
It sounds crazy, but it does seem to work in this video, which is also honest enough to show cases where it does not work.

This is an academic presentation from SIGGRAPH Asia 2013. There are always interesting things to read about from the various SIGGRAPH events such as these.

Things at SIGGRAPH and other high-end tech conferences can sometimes seem out of the reach of most of us; however (just to bring this full circle), this paper from 2002, just 11 years ago, shows a 3d scanning application which performs the same task as the Kinect Fusion application I started this post with.
It was also good to see recently from SIGGRAPH the wonderful MCOR paper printer now doing full colour. We had some samples from MCOR on Cool Stuff Collective and I wrote them a little article. Now, though, they are doing even more clever things with paper and colour.

It is great seeing the advances in tech and art and even more exciting as they mature into accessible tools and toys for anyone to use.

Disney Infinity – Money Grabbing but Metaversal

Disney Infinity arrived on the scene recently. It is Disney’s response to the clever development of Skylanders, where physical toys are used to select what happens on screen. If you thought Skylanders had a slightly cynical collect-’em-all, spend-’em-up side to it, then Disney have ramped that up to a new level.
With Skylanders you typically get a set of characters that you can experience the main game with, without buying new ones. Getting a new character for £8 or so gets you some new powers, effectively new lives for a game session, and possibly access to a bonus level.
Disney have chosen a more complicated (and, it seems, more expensive) route. The starter set gets you three characters, each effectively from a different set of game levels. So you get Cap’n Jack Sparrow, Mr Incredible and Sully from Pirates of the Caribbean, The Incredibles and Monsters University/Inc.
You get a “level” select piece (they call them play set pieces) that, when placed on the USB-powered platform, selects the available levels. The one in the starter set happens to cover all three games, but others are individual tokens.
There is also another token type that unlocks a little bit of content. These are the pocket-money-priced elements, like buying a Lego minifig. Ours unlocked a background from Wreck-It Ralph.
It is the collection on the right of the picture. A “starter” set.
[Image: the Disney Infinity starter set]
So we plugged it in, sparked it up, and I wanted to play with the predlets. After all, the games are multiplayer and there is space for two characters. The problem is that as you enter the Pirates of the Caribbean world you are only allowed to take a Pirates character with you. So the base starter set does not let you play the actual game levels as multiplayer; you have bought three single-player games.
What you need to do is shell out another £25, on top of the £50 already spent, to buy the “sidekicks”, a character from each series (on the left in the photo above).
Other play sets feature two characters from the same experience: Cars, The Lone Ranger etc. So then you are buying a multiplayer game.
You can buy individual characters for each game type, there are sets to collect, and you can buy extra characters for the same games. You are enticed to do this in game, as it points out you don’t own a character by showing a video of it.
Not allowing Sully to wander around the Pirates levels, or have Cap’n Jack in the big city with The Incredibles, sounds like an attempt at avoiding some sort of brand pollution, or some odd technical reason. However, that makes no sense when you get to the more interesting part of Infinity, which is the toy box.
This is a freeform multiplayer environment where any character and anything from anywhere can be added and created.
It is a metaverse, a shared world with all the user creation tools we expect in virtual worlds. It is a great leap forward too, bringing a degree of quality to the experience of building that we don’t often see.
I had a flashback to my first Second Life island when it was just flat land, and the possibilities of what could be done there and how it would evolve were endless (and it changed the path of my life, as it turned out). Big stuff!
[Image: deja vu, a flat piece of land and an avatar #infinity]
Cap’n Jack just stood in a field.
Then, with a quick menu selection, a little bit of racetrack (in the background) and a physically active beach ball with live shadows and all the trimmings were in world with me.
[Image: the beach ball and racetrack in the toy box]
Then we found that any character could join, so Sully was soon in world, and before you knew it we had some funny little games going. Predlet 1.0 made an obstacle course, 2.0 made a floating race track. We found ourselves setting little challenges with the basic things we had.
The trouble with this is that there are thousands of pieces to play with, but… they are all locked away. As you “play” or explore you complete challenges and achievements. Those unlock stars, as does levelling up or playing the main stories. Those stars let you have a random spin on a selection of the devices and objects available. It is a painfully slow grind to unlock things, and not being able to pick, it being a random selection, gets very annoying. We wanted more vehicles; we got a buggy eventually, but lots of things we didn’t want first.
For a creation tool it is missing the ability to just get on with it. For experienced virtual world builders like the predlets, used to Minecraft’s creative mode, it is a ridiculous application of grind, worthy of Nintendo’s grind tests.
I hope they patch this and let us just select anything from the palette. Mind you, they will probably make us buy some more physical product. That, after all, is what the extra collectible power discs (like the one that unlocked the Wreck-It Ralph backdrop) do. However, these are sold in closed packaging, so you don’t know what you are getting, in order to end up with “rare” items and swaps for the playground. That is something I have never really liked in kids’ toys.
So Disney have made something really clever, interesting and wonderful, and then locked it behind a big paywall, or rather a giant in-game fruit machine. All things that could be rectified to make this brilliant!
I should say too that the figures look like they are good quality, but Jack Sparrow must have had too much rum, as he broke off his stand. I am not sure how much force caused that to happen, but characters held on stands by just two small bits of plastic at the feet might make these not as playable for kids as Skylanders.
[Image: the broken Jack Sparrow figure]
Brand pollution isn’t so bad, is it? (Just updated with this Vine 🙂)

It’s just a doorbell

We had a new door fitted here at the family home/offices of Feeding Edge Ltd, and with it we needed a new doorbell. It used to be that we had a choice of a few mechanical rings and buzzes, but the digital age brought MIDI-like ringtones. I had always wanted to build a “record your own” doorbell but never got around to it. So I was very happy to wander into Homebase and see the Byron wirefree portable MP3 doorbell unit.
[Image: the Byron wirefree MP3 doorbell]
It is a normal doorbell, but with a USB port and the ability to play a 10-second MP3. Of course, having this choice is actually tricky: what do we do now, what can we put on there? Music is not a great choice, as it may be confused with the radio. I did try the hook from Geekin’ by will.i.am, “get your geek on get your get your geek on”. However, I then went for a rich set of sounds from Star Trek and used Audacity to edit up a mash-up of messages from Starfleet and red alerts. It is very effective, though I am not sure the entire family share my enthusiasm.
There may be an embarrassment factor to opening the door when the bell has been rung and the red alert is still going off, followed by a self-destruct message 🙂 We will see how it goes. I can, of course, always change it.
I managed to capture a bit of it in a Vine 🙂 You need to click the unmute button top left for the full effect, otherwise it’s just a flashing light 😉

I guess next would be a playlist of sounds and door tones? Any suggestions? I might get the Predlets to record a chorus of “Door! DOOOOR! Dad!!!! DOOOOOR” 🙂

Is this my weirdest article? – Flush 9

Sometimes an idea gets stuck somewhere in the back of your mind. It sits there and occasionally raises its head to bother you. One way to sort it out is to write it down, and even better to share it with others. Sometimes, though, these ideas and thoughts bundle themselves together, and if you think at an abstract level you can link them and get them all out in one go.
That is how I ended up writing the latest Flush Magazine article, where I go “From Koi carp to the Xbox One in a Parkour style free run of ideas”. It was really about the Xbox DRM sledgehammer solution, but it did seem to offer a pattern, or a modern fable, to consider in technology adoption and replacement.
My words have once again been beautifully presented by @tweetthefashion, with some fantastic artwork and presentation wonderfully in keeping with the odd direction this article took.
Have a look here, from page 110 onwards, but also don’t forget the rest of the magazine 🙂