unity3d


Number 5 is alive (Unity3d 5.0)

Yesterday at GDC Unity3d 5.0 got released. As a long-time user of it this is an interesting moment. Many of the fancier features around how graphic designers can make things look nice are not that high on my list of things I needed, however it's all very welcome.
As a pro licence holder I also now seem to get some extra discounts in the store and a few perks which is nice 🙂
Unity 5.0
I had tried the beta of 5.0, but I was mostly on 4.6 as that is the version every project I have worked on uses, and it also introduced the shiny new user interface code.
Before wrecking any projects with imports and checking all the old stuff worked I thought I would just throw together some things with the new basic prefabs and assets.
There is a nice ready rigged car and also a couple of planes in the kit bag now.
So…
I created a terrain, imported some SpeedTree trees from the asset store. Asked it to mass place the trees for me. I added a wind zone to make the trees blow around.
I then dropped the car prefab into the scene and made it blue. (That gave me the car motion on keys, sounds, skids etc.)
I moved the camera to attach it to the car.
Just for fun I dropped a few physics objects in it. Spheres and blocks.
So now there is a car world to raz around in knocking things over.
All this took about 15 minutes. It is here (if you have the Unity plugin). Make sure you click to give the world focus, then the arrow keys work to drive around.
OK it is not quite Forza Horizon but as a quick dive into Unity3d it was a good test 🙂
I then built an HTML5 version. This is slightly more clunky as it is early days for that export path compared to the web player. It is also 200MB as opposed to the 30MB of the player version, which makes sense as the HTML5 version has to bring its own player-type scripts with it. So rather than have you sit and look at a blank screen while it loads, that version is here on Dropbox.
Even with Unreal Engine now out there for free, Unity3d seems much more accessible to a techie like me.
I am looking forward to exploring some of the new stuff like the mixing desk for audioscapes. I also seem to have Playmaker free to download as part of “level 11” on the shop. That lets you create more flow chart based descriptions of camera pans etc.
The extra elements in the animation system also look intriguing.
As Unity3d covers the entire world of work when it comes to games it is hard to be great at everything, but it does let anyone, techie, designer, musician, animator etc. have a look at one another's worlds.
I think the next project, in 5.0 will be the most interesting yet.

MergeVR – a bit of HoloLens but now

If you are getting excited and interested, or just puzzling over what is going on with the Microsoft announcement about HoloLens, and can't wait the months/years before it comes to market, then there are some other options, very real, very now.
Just before Christmas I was very kindly sent a prototype of a new headset that uses an existing smartphone as its screen. It is called MergeVR. The first one like this we saw was the almost satirical take on Oculus Rift that Google took with Google Cardboard, a fold-up box that lets you strap your Android phone to your face.

MergeVR is made of very soft, comfortable spongy material. Inside are two spherical lenses that can be slid laterally in and out to adjust to the divergence of your eyes and get a comfortable view.
Rather like the AntVR I wrote about last time, this uses the principle of one screen split into two views. The MergeVR uses your smartphone as the screen and it slides comfortably into the spongy material at the front.
Using an existing device has its obvious advantages. The smartphones already have direction sensors in them, and screens designed to be looked at close up.
MergeVR is not just about a 3d experience of Virtual Reality (one where the entire view is computer generated). It is, as the name Merge suggests, about augmented reality, in this case taking your direct view of the world and adding data and visuals to it. This is known as a magic lens. You look through the magic lens and see things you would not normally be able to see, as opposed to a magic mirror, where you look at a fixed TV screen to see the effects of a camera feed merged with the real world.

The iPhone (in my case) camera has a slot to see through in the MergeVR. This makes it very different from some of the other Phone On Face (POF – made up acronym) devices. The extra free device I got with the AntVR, the TAW, is one of these non pass-through POFs. It is a holder and lenses with a folding mechanism that adjusts to hold the phone in place. With no pass-through it is just for watching 3d content.


AntVR TAW
Okay, so the MergeVR lets you use the camera, see the world, and then watch the screen close up without holding anything. The lenses make your left eye look at the right half and the right eye at the left half. One of the demo applications is instantly effective and has a wow factor. Using a marker based approach a dinosaur is rendered in 3d on the marker. Marker based AR is not new, neither is iPhone AR, but the stereoscopic hands-free approach, where the rest of the world is effectively blinkered for you, adds an extra level of confusion for the brain. Normally if you hold a phone up to a picture marker, the code will spot the marker, its orientation and relative position in the view, then render the 3d model on top. So if you, or the marker, moves, the model is moved too. When holding the iPhone up you can of course still see around it, rather like holding up a magnifying glass (magic lens remember). When you POF, though, your only view of the actual world is the camera view of the phone. So when you see something added and you move your body around, it is there in your view. It is only the slight lag and the fact the screen is clearly not the same resolution or lighting as the real world that causes you to not believe it totally.
The recently previewed Microsoft HoloLens and the yet to be seen Google-funded Magic Leap are the next step, removing the screen. They let you see the real world, albeit through some panes of glass, and then use projection tricks near to the eye, probably very similar to Pepper's ghost, to adjust what you see and how it is shaded, coloured etc. based on a deep sensing of the room and environment. It is markerless, room-aware blended reality, using the physical and the digital.

Back to the MergeVR. It also comes with a bluetooth controller, a small hand-held device to let you talk to the phone. Obviously the touch screen when in POF mode means you can't press any buttons 🙂 Many AR apps and examples like the DinoAR demo simply use your head movements and the sensors in the phone to determine what is going on. Other things though will need some form of user input. As the phone can see, it can see hands, but without a Leap Motion controller or a Kinect to sense the body some simpler mechanism has to be employed.
However, this is where MergeVR gets much more exciting and useful for any of us techies and metaverse people. The labs are not just thinking about the POF container but the content too. A Unity3d package is being worked on. This provides camera prefabs (rather like the Oculus Rift one) that split the Unity3d view into a stereo camera pair at runtime, with the right shape, size, perspective etc. for the MergeVR view. It provides access to the bluetooth controller inputs too.
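To give a flavour of what a camera prefab like that is doing under the hood, here is a minimal sketch of a side-by-side stereo rig in Unity. This is my own illustration, not the MergeVR package itself, and the eye separation value is just an assumption.

```csharp
// A minimal side-by-side stereo rig sketch (not the MergeVR code): two cameras,
// each drawing to one half of the phone screen, offset slightly so each eye gets
// its own viewpoint.
using UnityEngine;

public class SimpleStereoRig : MonoBehaviour
{
    public float eyeSeparation = 0.064f;   // roughly average interpupillary distance in metres

    void Start()
    {
        CreateEye("LeftEye", -eyeSeparation / 2f, new Rect(0f, 0f, 0.5f, 1f));
        CreateEye("RightEye", eyeSeparation / 2f, new Rect(0.5f, 0f, 0.5f, 1f));
    }

    Camera CreateEye(string eyeName, float xOffset, Rect viewport)
    {
        GameObject eye = new GameObject(eyeName);
        eye.transform.parent = transform;
        eye.transform.localPosition = new Vector3(xOffset, 0f, 0f);
        eye.transform.localRotation = Quaternion.identity;

        Camera cam = eye.AddComponent<Camera>();
        cam.rect = viewport;   // left half / right half of the screen
        return cam;
    }
}
```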
This means you can quickly build MergeVR 3d environments and deploy to the iPhone (or Android). Combine this with some of the AR toolkits and you can make lots of very interesting applications, or simply add 3d modes to existing ones you have. With the new Unity3d 4.6 user interface it will be even easier to add heads-up displays.
So within about 2 minutes of starting Unity I had a 3d view up on the iPhone in the MergeVR using Unity Remote. The only problem I had was using the USB cable for quick Unity Remote debugging, as the left hand access hole was a little too high. There is a side access on the right but the camera needs to be facing that way. Of course, being nice soft material, I can just make my own hole in it for now. It is a prototype after all.
It’s very impressive, very accessible and very now (which is important to us early adopters).
Let's get blending!

(Note the phone is not in the headset as I needed it to take the selfie 🙂)

An interesting game tech workshop in Wales

Last week I took a day out from some rather intense Unity3d development to head off to North Wales to Bangor. My fellow BCS Animation and Games Dev colleague Dr Robert Gittins invited me to keynote at a New Computer Technologies Wales event on Animation and Games 🙂
It is becoming an annual trip to similar events and it was good to catch up with David Burden of Daden Ltd again as we always both seem to be there.
As I figured that many of the people there were going to be into lots of games tech already I did not do my usual type of presentation, well not all the way through anyway. I decided to help people understand the difference between development in a hosted virtual world like Second Life and developing from scratch with Unity3d. This made sense as we had Unity3d on the agenda and there were also projects from Wales that were SL related, so I thought it a good overall intro.
I have written about the difference before back here in 2010 but I thought I could add a bit extra in explaining it in person and drawing on the current project(s) without sharing too much of things that are customer confidential.

Why SL development is not Unity3d development from Ian Hughes

I did of course start with a bit about Cool Stuff Collective and how we got Unity3d on kids TV back in the Halloween 2010 edition. This was the show that moved us from CITV to ITV prime Saturday morning.
I added a big slide of things to consider in development that many non game developers and IT architects will recognise. Game tech development differs in content from a standard application, but the infrastructure is very similar. The complication is in the "do something here" boxes of game play and the specifics of real time network interaction between clients, which is different from many client server type applications (like the web).

After that I flipped back from tech to things like Forza 5 and in-game creation of content, Kinect and Choi Kwang Do, Project Spark and of course the Oculus Rift. I was glad I popped that in as it became a theme throughout the pitches and most people mentioned it in some way, shape or form 🙂

It was great to see all the other presentations too. They covered a lot of diverse ground.

Panagiotis Ritsos from Bangor University gave some more updates on the challenges of teaching and rehearsing language interpretation in virtual environments with EVIVA/IVY, the Second Life projects and now the investigations into Unity3d.

Llyr ap Cenydd from Bangor University shared his research on procedural animation and definitely won the prize for the best visuals as he showed his original procedural spider and then his amazing Oculus Rift deep sea experience with procedurally generated animations of dolphins.
Just to help in case this seems like gobbledegook: very often animations have been "recorded", either by someone or something being filmed in a special way that takes their movements and makes them available digitally as a whole. Procedural generation instead uses sense-and-respond to the environment and the construction of the thing being animated. Things are not recorded but happen in real time because they have to. An object can be given a push or an impulse to do something and the rest is discovered by the collection of bits that make up the animated object. It is very cool stuff!

Just before the lunch break we had Joe Robins from Unity3d, the community evangelist and long term member of the Unity team, show us some of the new things in Unity 5 and have a general chat about Unity. He also did a Q&A session later that afternoon. It was very useful as there is always more to learn or figure out.
We all did a bit of a panel, with quite a lot of talk about education of kids in tech and how to just let them get on with it alongside the teachers, rather than waiting for teachers to have to become experienced programmers.
After lunch it was Pikachu time, or Pecha Kucha, whatever it is called 🙂 http://www.pechakucha.org 20 slides, each shown for 20 seconds, in a fast fire format. It is really good, covers lots of ground and raises lots of questions.

David Burden of Daden Ltd went first with VR – the Second Coming of Virtual Worlds, exploring the sudden rise of VR and where it fits on the social adoption and tech adoption curves. A big subject, and of course VR is getting a lot of press just as virtual worlds did. It is all the same, but with different affordances of how to interact. They co-exist.

Andy Fawkes of Bohemia Interactive talked about the Virtual Battlespace – From Computer Game to Simulation. His company has the Arma engine that was originally used for Operation Flashpoint, and now has a spin-off with the cult classic Day Z. He talked about the sort of simulations in the military space that are already heavily used and how that is only going to increase. An interesting question was raised about the impact of increasingly real simulations; his opinion was that no matter what we do currently we all still know the difference and that the real effects of war are drastically different. The training is about the procedures to get you through that effectively. There has been concern that drone pilots, who are in effect doing real things via a simulation, are too detached from the impact they have. Head to the office, fly a drone, go home to dinner. A serious but interesting point.

Gaz Thomas of The Game HomePage then gave a sparky talk on How to entertain 100 million people from your home office. Gaz is a budding new game developer. He has made lots of quick fire games; not trained as a programmer, he wanted to do something on the web, set up a website and then started building games as a way to bring people to his site. This led to some very popular games, but he found he was cloned very quickly and now tries to get the mobile and web versions released at the same time. It was very inspirational and great to see such enthusiasm and get up and go.

Ralph Ferneyhough of newly formed Quantum Soup Studios talked about The New AAA of Development – Agile, Artistic, Autonomous. This was a talk about how being small and willing to try newer things is much more possible and needed than the constant churn in the games industry of the sequel to the sequel of the sequel. The sums of money involved and sizes of projects lead to stagnation. It was great to hear from someone who has been in the industry for a while branching out from corporate life. A fellow escapee, though from a different industry vertical.

Chris Payne of Games Dev North Wales gave the final talk on Hollywood vs VR: The Challenge Ahead. Chris works in the games industry and for several years has been a virtual camera expert. If you have tried to make cameras work in games, or played one where it was not quite right, you will appreciate this is a very intricate skill. He also makes films and pop videos. It was interesting to hear about the challenges that attempting to do 360 VR films is going to pose for what is a framed 2d medium. Chris showed a multi camera picture of a sphere with lenses poking out all around it, rather like the Star Wars training drone on the Millennium Falcon that Luke tries his light sabre against. This new camera shoots in all directions. Chris explained though that it was not possible to build one that was stereoscopic. The type of parallax and offsets that are needed can only really be done post filming. So a lot has to be done to make this giant 360 footage able to be interacted with in a headset like the Rift. However that is just the start of the problems. As he pointed out, the language of cinema, the tricks of the trade, just don't work when you can look anywhere and see anything. Sets can't have crew behind the camera as there is no behind the camera. Storytellers have to consider whether you are in the scene and hence acknowledged, or a floating observer; focus pulls to gain attention don't work. Instead game techniques to attract you to the key story elements are needed. Chris proposed that as rendering gets better it is more likely that VR movies are going to be all realtime CGI in order to get around the physical problems of filming. It is a fascinating subject!

So it was well worth the 4am start to drive the 600 mile round trip and be back by 10pm 🙂

It’s got the lot – metaverse development

My current project has kept me pretty busy with good old fashioned hands on development. However, sometimes it is good to step back and see just how many things a project covers. I can’t go into too much detail about what it is for but can share the sort of development coverage.

(*update 11/6/14 Just adding this picture from a later post that describes this environment)
It is a Unity3d multi-user environment with point and click. It works on the web so it needs a socket server to broker the network communications, so it has a Photon Server. That Photon Server is not on their cloud but on a separately hosted box with a major provider, so it needs my attention sys-admin wise, configuring it and keeping it up to date.
The Unity3d system needs to be logged into and to record things that have happened in the environment. So I had to build a separate set of web pages and PHP to act as the login and the API for the Unity3d web plugin to talk to. This has to live on the server of course. As soon as you have login, data, users etc. you need a set of admin screens and code to support that too.
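For anyone who has not wired Unity up to a web back end before, the client side of that conversation is basically a form post. A minimal sketch, with a made-up endpoint and field names rather than the project's real API:

```csharp
// Minimal sketch of posting login details from the Unity web player to a
// hypothetical PHP endpoint. The URL and field names are illustrative only.
using UnityEngine;
using System.Collections;

public class LoginClient : MonoBehaviour
{
    IEnumerator Login(string user, string password)
    {
        WWWForm form = new WWWForm();
        form.AddField("user", user);
        form.AddField("password", password);

        // The web player may only call back to the server it was loaded from.
        WWW www = new WWW("http://example.com/api/login.php", form);
        yield return www;

        if (string.IsNullOrEmpty(www.error))
            Debug.Log("Login response: " + www.text); // e.g. a session token for later API calls
        else
            Debug.LogWarning("Login failed: " + www.error);
    }
}
```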
The unity3d system also needs voice communication as well as text chat. So that’s through Photon too.
The actual Unity3d environment has both regular users and an admin user in charge. So there are lots of things flowing back and forth to keep in sync across the session and to pass to the database. All my code is in C#, though sometimes a bit of JS will slip in. We have things like animations using the animation controller and other Unity goodies like NavMesh in place too.
I am working with a 3d designer so this is a multi-person project. I have had to set up Mercurial repositories, hosting the repo on Bitbucket. We sync code and builds using Atlassian SourceTree, which is really great. I also have an issue tracking system with Atlassian, so we have a JIRA. It means when I check code in and push the repository I can specify the JIRA reference number for the issue and the commit appears logged on the issue, combined with all the handy notifications to all concerned.
As I have a separate server component running I had to set up another repository to protect and synchronise any server changes. The server has its own repository identity so it can pull the Unity3d builds down to the server too.
There are complications in doing database communication, as the Unity web player will only talk to the server it is served from using the WWW classes. That makes local testing of multiuser a little tricky. The Unity dev environment is able to emulate the server name but the built versions can't, so there is a lot of testing bypass code needed.
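The bypass code mostly boils down to swapping the base URL depending on where the build is running. A small sketch of that idea, with placeholder addresses rather than the real ones:

```csharp
// A minimal sketch of the kind of testing bypass described above; the addresses
// are placeholders, not the project's actual servers.
public static class ApiConfig
{
    public static string BaseUrl
    {
        get
        {
#if UNITY_EDITOR
            // In the editor we can pretend to be the live server for local multiuser tests.
            return "http://localhost/api/";
#else
            // Built web player versions must talk to the host they were served from.
            return "http://example.com/api/";
#endif
        }
    }
}
```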
Oh I forgot to mention, this is all in Arabic too. There is nothing wrong with that except I don’t know the language. Also Arabic is a right to left language so things have to be put in place to ensure that text, chat etc all flows correctly.
A few little problems arose with this. Unity has an excellent Arabic component that allows you to apply right to left rendering to any output text, however it does not work on input fields. That is a bit tricky when you need text chat, typing in questions and responses etc. So I have ended up writing a sort of new input field: I use a text label, capture the keystrokes, pass them to the Arabic fixer component and then display the returned right to left version in the label. I do of course lose things like the cursor and focus as the label is an output device, but needs must.
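The shape of that workaround is roughly the following. This is a simplified sketch rather than my actual component, and ArabicFixer.Fix stands in for whatever reshaping call the Arabic support asset provides, so treat that API as an assumption.

```csharp
// Rough sketch of a home-made RTL "input field": capture keystrokes, reshape via
// the Arabic fixer (assumed API), and show the result in a label.
using UnityEngine;

public class RtlInputLabel : MonoBehaviour
{
    private string buffer = "";   // raw keystrokes in typing order
    private string display = "";  // reshaped right-to-left text for the label

    void Update()
    {
        // Capture typed characters ourselves instead of using a TextField.
        foreach (char c in Input.inputString)
        {
            if (c == '\b')
            {
                if (buffer.Length > 0)
                    buffer = buffer.Substring(0, buffer.Length - 1);   // backspace
            }
            else if (c != '\n' && c != '\r')
            {
                buffer += c;
            }
        }

        // Hand the raw text to the Arabic fixer component (assumed call).
        display = ArabicFixer.Fix(buffer);
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 400, 30), display);
    }
}
```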
In order to support Arabic in HTML and in the database I had to ensure that the text encoding of everything is UTF-8. There is also a directive attribute, dir="rtl", that helps browsers know what to do with things. However I have found that while this works with HTML input fields it seems not to work with password fields. My password field will not let me type Arabic into it; the keyboard language chooser on the Mac reverts to UK and Arabic is greyed out. This caused me a lot of confusion when logging in.
There is also the confusion of what to type. It is relatively easy to cut and paste translated Arabic labels into strings, but when testing a chat system or user names I needed to know what English keystrokes generated what Arabic phrase (that's not a translation, that's a "how do I type something meaningful in Arabic and see it come up on the screen").
Luckily my good friend Rob Smart came to my aid with “wfhp hgodn” which equates to صباح الخير which is a variant of good morning. It helped me see where and when I was getting the correct orientation. Again this is not obvious when you start this sort of thing 🙂
Anyway, it's back to layering and continuous improvement. Fixing bugs, adding function. It is pretty simple on paper, but the number of components and systems, languages and platforms that this crosses is quite full on.
The project is a 3 person one, Project manager/producer, graphic designer and me. We all provide input to the project.
So if you need any help or work doing with Unity3d, C#, Photon, HTML, PHP, MySQL, RTL languages, cloud servers, Bitbucket, Mercurial, SourceTree or JIRA then I am more than slightly levelled up, though there is always more to learn.
Phew!

Use Case 2 – real world data integration – CKD

As I am looking at a series of boiled down use cases of using virtual world and gaming technology I thought I should return to the exploration of body instrumentation and the potential for feedback in learning a martial art such as Choi Kwang Do.
I have of course written about this potential before, but I have built a few little extra things into the example using a new Windows machine with a decent amount of power (HP Envy 17″) and the Kinect for Windows sensor with the Kinect SDK and Unity3d package.
The package comes with a set of tools that let you generate a block man based on the joint positions. The controller piece of code also has some options for turning on the user map and skeleton lines.
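The block man idea itself is simple: one cube per tracked joint, moved every frame. Here is a boiled-down sketch of that; KinectJoints.GetJointPosition is a stand-in for whatever the Kinect wrapper in the package actually exposes, so the call and the joint list are assumptions.

```csharp
// Simplified block man sketch: a cube per joint, repositioned each frame from
// sensor data. The KinectJoints wrapper here is hypothetical.
using UnityEngine;

public class BlockMan : MonoBehaviour
{
    public GameObject jointCubePrefab;      // small cube used for each joint
    public string[] jointNames = { "Head", "ShoulderLeft", "ShoulderRight",
                                   "ElbowLeft", "ElbowRight", "HandLeft", "HandRight",
                                   "Spine", "HipLeft", "HipRight", "KneeLeft", "KneeRight" };
    private GameObject[] cubes;

    void Start()
    {
        cubes = new GameObject[jointNames.Length];
        for (int i = 0; i < jointNames.Length; i++)
            cubes[i] = Instantiate(jointCubePrefab);
    }

    void Update()
    {
        for (int i = 0; i < jointNames.Length; i++)
        {
            // Hypothetical wrapper call returning the joint position in world space.
            Vector3 pos = KinectJoints.GetJointPosition(jointNames[i]);
            cubes[i].transform.position = pos;
        }
    }
}
```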
In this example I am also using Unity Pro, which allows me to position more than one camera and have each of those generate a texture on another surface.
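The extra views are just render textures. A minimal sketch of one of the "screens", assuming a spare camera and a quad to show it on:

```csharp
// Minimal render texture sketch: a second camera draws into a texture which is
// displayed on a flat surface, giving the side/rear/top-down "screens".
using UnityEngine;

public class SideViewScreen : MonoBehaviour
{
    public Camera sideCamera;   // e.g. a camera placed to the side, or above for the top-down view
    public Renderer screenQuad; // the flat surface used as the screen

    void Start()
    {
        RenderTexture rt = new RenderTexture(512, 512, 16);
        sideCamera.targetTexture = rt;        // camera now draws into the texture...
        screenQuad.material.mainTexture = rt; // ...and the quad displays it
    }
}
```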
You will see the main block man appear centrally “in world”. The three screens above him are showing a side view of the same block man, a rear view and interestingly a top down view.
In the bottom right is the “me” with lines drawn on. The kinect does the job of cutting out the background. So all this was recorded live running Unity3d.
The registration of the block man and the joints isn’t quite accurate enough at the moment for precise Choi movements, but this is the old Kinect, the new Kinect 2.0 will no doubt be much much better as well as being able to register your heart rate.

The cut-out "me" is a useful feature, but you can only have that projected onto the flat camera surface; it is not a thing that can be looked at from left/right etc. The block man though is made of actual 3d objects in space. The cubes are coloured so that you can see joint rotation.
I think I will reduce the size of the joints and try and draw objects between them to give him a similar definition to the cutout “me”.
The point here though is that game technology and virtual world technology is able to give a different perspective of a real world interaction. Seeing techniques from above may prove useful, and is not something that can easily be observed in class. If that applies to Choi Kwang Do then it applies to all other forms of real world data. Seeing from another angle, exploring and rendering in different ways can yield insights.
It also is data that can be captured and replayed, transmitted and experienced at a distance by others. Capture, translate, enhance and share. It is something to think about: what different perspectives could you gain of data you have access to?

A simple virtual world use case – learning by being there

With my metaverse evangelist hat on I have for many years, in presentations and conversations, tried to help people understand the value of using game style technology in a virtual environment. The reasons have not changed, they have grown, but a basic use case is one of being able to experience something, to know where something is or how to get to it, before you actually have to. The following is not to show off any 3d modelling expertise; I am a programmer who can use most of the tool sets. I put this "place" together mainly to figure out Blender, to help the predlets build in things other than Minecraft. With a new Windows laptop complementing the MBP, I thought I would document this use case by example.
Part 1 – Verbal Directions
Imagine you have to find something, in this case a statue of a monkey’s head. It is in a nice apartment. The lounge area has a couple of sofas leading to a work of art in the next room. Take a right from there and a large number of columns lead to an ante room containing the artefact.
What I have done there is describe a path to something. It is a reasonable description, and it is quite a simple navigation task.
Now let's move from words, or a verbal description of placement, to a map view. This is the common one we have had for years. Top down.
Part 2 – The Map
A typical map; you will start from the bottom left. It is pretty obvious where to go: two rooms up, turn right, keep going and you are there. This augments the verbal description, or can work just on its own. Simple, and quite effective, but it filters a lot of the world out in simplification, mainly because maps are easy to draw. It requires a cognitive leap to translate to the actual place.
Part 3 – Photos
You may have often seen pictures of places to give you a feel for them. They work too. People can relate to the visuals, but it is a case of you get what you are given.
The entrance
The lounge
The columned corridor
The goal.
Again in a short example this allows us to get quite a lot of place information into the description. “A picture paints a thousand words”. It is still passive.
A video of a walkthrough would of course be an extra step here; that is just more pictures one after the other. Again though it is directed. You have no choice in how to learn, how to take in the place.
Part 4 – The virtual
Models can very easily now be put into tools like Unity3d and published to the web to be walked around. If you click here, you should get a Unity3d page and after a quick download (assuming you have the plugin 😉 if not, get it!) you will be placed at the entrance to the model, which is really a 3d sketch, not a full on high end photo realistic rendering. You may need to click to give it focus before walking around. It is not a shared networked place, it is not really a metaverse, but it has become easier than ever to network such models and places if sharing is an important part of the use case (such as in the hospital incident simulator I have been working on).
The mouse will look around, and 'w' will walk you the way you are facing ('s' is backwards, 'a' and 'd' are side to side). Take a stroll in and out, down to the monkey and back.
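Controls like that take very little code. A minimal sketch, assuming a CharacterController on the player object (Unity also ships ready-made character controller prefabs that do the same job):

```csharp
// Minimal first-person controls sketch: mouse look plus WASD movement.
using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class SimpleWalker : MonoBehaviour
{
    public float speed = 3f;
    public float lookSensitivity = 2f;
    private float pitch;

    void Update()
    {
        // Mouse look: yaw the body, pitch the camera (assumed to be a child of this object).
        transform.Rotate(0f, Input.GetAxis("Mouse X") * lookSensitivity, 0f);
        pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * lookSensitivity, -80f, 80f);
        Camera.main.transform.localEulerAngles = new Vector3(pitch, 0f, 0f);

        // WASD movement relative to the way we are facing.
        Vector3 move = transform.forward * Input.GetAxis("Vertical")
                     + transform.right * Input.GetAxis("Horizontal");
        GetComponent<CharacterController>().SimpleMove(move * speed);
    }
}
```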
I suggest that now you have a much better sense of the place: the size, the space, the odd lighting. The columns are close together; you may have bumped into a few things. You may linger on the work of art. All of these tiny differences are putting this place into your memory. Of course finding this monkey is not the most important task you will have today, but apply the principle to anything you have to remember, conceptual or physical. Choosing your own way through such a model or concept is simple but much more effective, isn't it? You will remember it longer and maybe discover something else on the way. It is not directed by anyone; your speed, your choice. This allows self reflection in the learning process, which reinforces understanding of the place.
Now imagine this model made properly, with nice textures and lighting, a photo realistic place, and pop on a VR headset like the Oculus Rift, which in this case is very simple with Unity3d. Your sense of being there is even further enhanced and it only takes a few minutes.
It is an obvious technology isn’t it? A virtual place to rehearse and explore.
Of course you may have spotted that this virtual place, whilst built in Unity3d to walk around, provided the output for the map and for the photo navigation. Once you have a virtual place you can still do things the old way if that works for you. It's a virtual virtuous circle!

Talking heads – Mixamo, Unity3d and Star Wars

High end games have increased people's expectations of any experience that uses game technology. Unity3d lets any of us build a multitude of applications and environments but also exposes us to the breadth of skills needed to make interesting, engaging environments.
People, avatars and non player characters are one of the hardest things to get right. The complexity of building and texturing a mesh model of a person is beyond most people. Once built the mesh then has to have convincing bone articulation to allow it to move. That then leads to needing animations and program control of those joints. If it is complicated enough with a body then it gets even more tricky with the face. If a character is supposed to talk in an environment up close then the animations and structure required are even more complex. Not only that but if you are using audio, the acting or reading has to be convincing and fit the character. So a single avatar needs design, engineering, voice over and production skills all applied to it. Even for people willing to have a go at most of the trades and skills that is a tall order.
So it is great that companies like Mixamo exist. They already have some very good free rigged, animatable people in the Unity store, which help us small operators get to some high end graphic design quickly. They have just added to their portfolio of cool things with Mixamo Face Plus.

They have a Unity plugin that can capture realtime face animation using video or a web cam. So now all techies have to do is figure out the acting skills and voice work in order to make these characters come alive. I say "all" 🙂 it is still a mammoth task, but some more tools in the toolbox can't hurt.
They have created a really nice animated short film using Unity which shows the result of this technology, blended with all the other clever things you can do in Unity3d. Mind you, take a look at the number of people in the credits 🙂

Even more high end though is this concept video using realtime high quality rendered characters and live performance motion capture in the Star Wars universe.

The full story is here, with direct quotes from Lucasfilm at a Technology Strategy Board meeting at BAFTA in London. So maybe I will be able to make that action movie debut after all. There is of course a vector here to consider for the interaction of humans across distances mediated by computer generated environments. (Or virtual worlds as we like to call them 😉 )

Jumping into LEAP

I had originally thought I would not bother with a LEAP controller. However new technology has to be investigated. That is what I do after all 🙂
I ordered the LEAP from Amazon, it arrived very quickly. So whilst I might not have been an early adopter, and was not on the developer programme it is not hard to get hold of one now. It is £69.99 but that is relatively cheap for a fancy peripheral.
Giving LEAP a go
It is interesting, the self proclamation on the box: "The remarkably accurate, incredibly natural way to interact with your computer". My first impressions are that it is quite accurate. However, as with all gesture based devices, there is no tactile feedback, so you have to sort of feel your way through space to get used to where you are supposed to be.
Leap
However the initial setup demonstration works very well giving you a good sense for how it is going to work.
It comes with a few free apps via Airspace and access to another ecosystem to buy some more.
The first one I clicked on was Google Earth, but it was a less than satisfying experience as it is not that obvious how to control it so you end up putting the world into a Superman style spin before plunging into the ocean.
I was more impressed with the nice target catching game DropChord (which has DoubleFine’s logo on it). This has you trying to intersect a circle with a chord and hit the right targets to some blasting music and glowing visuals. It did make my arms ache after a long game of it though!
What was more exciting for me was to download the Unity3d SDK for LEAP. It was a simple matter of dropping the plugin into a Unity project and then importing a few helper scripts.
The main one, Leap Unity Bridge, can be attached to a game object. You then configure it with a few prefabs that will act as fingers and palms, press run (and if you have the camera pointing the right way) and you see your objects appear as your fingers do.
Many of the apps on Airspace are particle pushing creative expression tools. So creating an object that is a particle generator for fingers immediately gives you the same effect.
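For the curious, the underlying idea looks roughly like this. It is a sketch assuming the Leap C# API of the time (Controller/Frame/Fingers); the bridge script in the SDK does this, and more, for you.

```csharp
// Rough sketch: prefab instances follow fingertip positions reported by the LEAP.
using UnityEngine;
using Leap;

public class FingerDots : MonoBehaviour
{
    public GameObject fingerPrefab;   // e.g. a small sphere or a particle emitter
    private Controller controller = new Controller();
    private GameObject[] dots = new GameObject[10];

    void Start()
    {
        for (int i = 0; i < dots.Length; i++)
            dots[i] = Instantiate(fingerPrefab);
    }

    void Update()
    {
        Frame frame = controller.Frame();
        for (int i = 0; i < dots.Length; i++)
        {
            if (i < frame.Fingers.Count)
            {
                // Leap reports millimetres relative to the device; scale down into scene units.
                Vector tip = frame.Fingers[i].TipPosition;
                dots[i].transform.position = new Vector3(tip.x, tip.y, -tip.z) * 0.01f;
                dots[i].SetActive(true);
            }
            else
            {
                dots[i].SetActive(false);
            }
        }
    }
}
```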
Leap unity
It took about 10 minutes to get it all working (6 of those were downloading on my slow ADSL).
The problem I can see at the moment is that pointing is a very natural thing to do, and that works great, though of course the pointing is relative to where the LEAP is placed. So you need to have a lot of visual feedback and large buttons (rather like Kinect) in order to make selections. Much of that is easier with touch or with a mouse.
Where it excels though is in visualisation and music generation where you get a sense of trying to master a performance and get to feel you have an entire space to play with, not limiting yourself to trying to select a button or window on a 2d screen which is a bit (no) hit and miss.
I spent a while tinkering with Chordion Conductor that lets you play a synth in various ways. The dials to adjust settings are in the top row and you point and twirl your finger on the dials to make adjustments. It is a fun and interesting experience to explore.
Just watch out where you are seen using the LEAP. You are either aggressively pointing at the screen, throwing gang signs or testing melons for ripeness in a Carry on Computing style.
I am looking forward to seeing if I can blend this with my Oculus Rift and Unity3d when it arrives though 🙂

Training in a virtual hospital + zombies

It is not very often I get to write in much detail about some of the work that I do, as often it is within other projects and not always for public consumption. My recent MMIS (Multidisciplinary Major Incident Simulator) work for Imperial College London, and in particular for Dave Taylor, is something I am able to talk about. I have known Dave for a long while through our early interactions in Second Life, when I was at my previous company being a Metaverse Evangelist and he was at NPL. Since then we have worked together on a number of projects, along with the very talented artist and 3d modeller Robin Winter. If you do a little digging you will find Robin has been a major builder of some of the most influential places in Second Life.
Our brief for this project was to create a training simulation for dealing with a massive influx of patients to an already nearly full hospital. The aim is that several people running different areas of the hospital have to work together to make space and move patients around to deal with the influx of new patients. It is about the human cooperation and following protocol to reach as good an answer as possible. We also had a design principle/joke of "No Zombies".

Much of this sort of simulation happens around a desk at the moment, in a more role play, D&D type of fashion. That sort of approach offers a lot of flexibility to change the scenario, to throw in new things. In moving to an online virtual environment version of the same simulation activity we did not want to lose that flexibility.
Initially we prototyped the whole thing in Second Life. Robin built a two floor hospital and created the atmosphere and visual triggers that participants would expect in a real environment.

Note this already moves on from sitting around a table focussing on the task and starts to place it in context. Something else to note, however, is that the environment and the creation of it can become a distraction from the learning and training objective. It is a constant balance between real modelling, game style information and finding the right triggers to immerse people.

For me the challenge was how to manage an underlying data model of patients in beds in places, of inbound patients, and a simple enough interface to allow bed management to be run by people in world. An added complication was that specific timers and delays needed to be in place. Each patient may take more or less time to be moved depending on their current treatment. So you can request a patient is moved, but you then may have to wait a while until the bed is free. Incoming patients to a bed also have a certain time to be examined and dealt with before they can possibly be moved again.

A more traditional object orientated approach might be for each patient to hold their own data and timings, but I decided to centralise the data model for simplicity. The master controller in world decides who needs to change where and sends a message to any interested party to do what they need to do. That means the master controller holds the data for the various timers on each patient and acts as the state machine for the patients.
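Boiled right down, the controller is one collection of patient records, one loop ticking the timers, and a notification when a state changes. Here is a sketch of that shape, written in C# as per the later Unity version; the states, timings and event are illustrative rather than the project's actual model.

```csharp
// Sketch of a centralised patient state machine with a single timer loop.
using System;
using System.Collections.Generic;
using UnityEngine;

public enum PatientState { WaitingToArrive, BeingExamined, ReadyToMove, Moving, InBed }

public class PatientRecord
{
    public string Id;
    public string BayName;          // which spawn point / bay the patient occupies
    public PatientState State;
    public float SecondsRemaining;  // time left in the current state
}

public class MasterController : MonoBehaviour
{
    public static event Action<PatientRecord> OnPatientStateChanged;
    private Dictionary<string, PatientRecord> patients = new Dictionary<string, PatientRecord>();

    void Update()
    {
        foreach (PatientRecord p in patients.Values)
        {
            if (p.SecondsRemaining <= 0f) continue;
            p.SecondsRemaining -= Time.deltaTime;
            if (p.SecondsRemaining <= 0f)
                Advance(p);
        }
    }

    void Advance(PatientRecord p)
    {
        // Example transition: once examination time runs out the patient can be moved again.
        if (p.State == PatientState.BeingExamined)
            p.State = PatientState.ReadyToMove;

        if (OnPatientStateChanged != null)
            OnPatientStateChanged(p);   // beds, notice boards, whiteboards etc. listen for this
    }
}
```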
In order to have complete flexibility of hospital layout, I made sure that each hospital bay was not a fixed point. This meant dynamically creating patients, beds and equipment at the correct point in space in world. I used the concept of a spawn point. Uniquely identified bay names placed as spawn points around the hospital meant we could add and remove bays and change the hospital layout without changing any code, making this as data driven as possible. Multiple scenarios could then be defined with different bay layouts and hospital structure, and with different types of patients and time pressures, again without changing code. The ultimate aim was to be able to generate a modular hospital based on the needs of the scenario. We stuck to the basics though, of a fixed environment (as it was easy to move walls and rooms manually in Second Life) with dynamic bays that rezzed things in place.
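In Unity terms the spawn point trick is just empty objects named after bay identifiers, so a scenario can refer to bays by name without the code caring about the layout. A small sketch, with the names and lookup being illustrative assumptions:

```csharp
// Sketch of the spawn point idea: beds are instantiated at empty marker objects
// placed around the hospital, e.g. "Bay_A1", so the layout stays data driven.
using UnityEngine;

public class BaySpawner : MonoBehaviour
{
    public GameObject bedPrefab;

    public GameObject SpawnBed(string bayName)
    {
        GameObject spawnPoint = GameObject.Find(bayName);
        if (spawnPoint == null)
        {
            Debug.LogWarning("No spawn point for bay " + bayName);
            return null;
        }
        return (GameObject)Instantiate(bedPrefab,
                                       spawnPoint.transform.position,
                                       spawnPoint.transform.rotation);
    }
}
```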
This meant I could actually build the entire thing in an abstract way on my own plot of land, also as a backup.
code
I love to use shapes to indicate the function of something in context. The controller is the box in this picture. The wedge shape is the data. They are close to one another physically. The torus shapes are the various bays and beds. The flat planes represent the whiteboard. They are grouped in order and in place. You can be in the code, and think about object responsibility through shape and space. It may not work for everyone but it does for me. The controller has a lot of code in it that also has a more traditional structure. One day it would be nice to be able to see and organise that in a similar way.

This created a challenge in Second Life as there is only actually 64k of memory available to each script. Here I had a script dealing with a variable number of patients, around 50 or so. Each patient needed data for several timer states and some identification and description text. Timers in Second Life are a one-per-script sort of structure, so instead I had to use a single timer loop to update all the individual timers and check for timeouts on each patient. Making the code nice and readable with lots of helper functions proved not to be the ideal way forward; the overhead of tidiness was more bytes in the stack getting eaten up. So gradually the code was hacked back to being inline operations of common functions. I also had to initiate a lookup in a separate script object for the longer pieces of text, and ultimately yet another to match patients to models.

The prototype enabled the customers (doctors and surgeons) to role play the entire thing through and helped highlight some changes that we needed to make.
The most notable was that of indicating to people what was going on. The atmosphere of pressure of the situation is obviously key. Initially the arriving patients were sent directly to the ward or zone that was indicated in the scenario configuration. This meant I had to write a way to find the next available free bed in a zone, generic enough to deal with however many beds have been defined in the dynamic hospital. Patients arrived, neatly assigned to various beds. Of course as a user of the system this was confusing: who had arrived where? Teleporting patients into bays is not what normally happens. To try and help I added non real world indicators, lights over beds etc., so that across a ward you could see new patients that needed to be dealt with.
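The bed finding itself is a small, generic lookup. A sketch of the shape of it, assuming each bed knows its zone and occupancy (the Bed type here is illustrative):

```csharp
// Sketch of the generic "next free bed in a zone" lookup.
using System.Collections.Generic;

public class Bed
{
    public string Zone;       // e.g. "AE", "Ward1", "Theatre"
    public bool Occupied;
    public string BayName;
}

public static class BedFinder
{
    // Returns the first unoccupied bed in the requested zone, or null if the zone is full.
    public static Bed NextFreeBed(IEnumerable<Bed> beds, string zone)
    {
        foreach (Bed bed in beds)
            if (bed.Zone == zone && !bed.Occupied)
                return bed;
        return null;   // caller can then fall back to the corridor queue / error ward
    }
}
```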
If a patient arrived automatically but there was no bed, they were placed on one of two beds in the corridor: a sort of visual priority queue. That was a good mechanism to indicate overload and pressure for the exercise. However we were left with the problem of what happened when that queue was full. The patients had become active and arrived in the simulation but had nowhere to go. In game terms this is a massive failure, so I treated it as such and held the patients in an invisible error ward, but put out a big chat message saying this patient needed dealing with.
I felt it was too clunky to have to walk around the ward keeping an eye out, so as I had a generic messaging system that told patients and beds where to be, I was able to make more than one object respond to a state change. This led to a quick build of a notice board in the ward. At a glance, red, green and yellow statuses on beds could be seen. Still, I was not convinced this was the right place for that sort of game style pressure. It needed a different admissions process, one that was controlled by the ward owners. They would need to be able to still say "yes, bring them in to the next available bed" (so my space finding code would still be of use) or direct a patient to a specific bed.
The overall bed management process, once a patient was "in" the hospital:
SL Hospital
The prototype led to the build of the real thing. It was a natural path of migration to Unity3d, as I knew we could efficiently build the same thing in a way that would let us simply use web browsers to access the hospital. I also knew that using Exit Games' Photon Server I could get client applications talking to one another in sync. From a software engineering point of view I knew that in C# I could create the right data structures and underlying model to replicate the Second Life version, but in a much better code structure. It also meant I could build a lot more user interface elements more simply, as this was a dedicated client for this application. HUDs work in Second Life for many things, but ultimately you are trying to stop things happening; you don't want people building or moving things etc. In a fixed and dedicated Unity client you can focus on the task. Of course Second Life already had a chat system and voice, so there were clearly a lot of extra things to build in Unity, but there is more than one way to skin a virtual cat.

The basic hospital and patient bed spawning points connected via Photon in Unity were actually quite quick to put together, but as ever the devil is in the detail. Second Life is a server based application that clients connect to. In Unity you tend to have one of the clients act as a server, or you have each client responsible for something and let the others take a ghosted synchronisation, or a mixture, as I ended up with. Shared network code is complicated. Understanding who has control and responsibility for something, when it is distributed across multiple clients, takes a bit of thought and attention.

The easiest one is the player character. All the Unity and Photon examples work pretty much the same way. Using the Photon toolkit you can instantiate a Unity object on a client and have that client as the owner. You then have the parameters or data that you want to synchronise defined in a fairly simple interface class. The code for objects has two modes that you cater for. The first is being owned and moved around; the other is being a ghosted object owned by someone else, just receiving serialised messages about what to do. There are also RPC calls that can be made asking objects on other clients to do something. This is the basis for everything shared across clients.
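A cut-down sketch of that owned/ghosted split, along the lines of the standard Photon Unity Networking examples of the time (exact class names depend on the Photon version, so treat this as illustrative):

```csharp
// Sketch of a Photon-synchronised object: the owner writes state, ghosts read it.
using UnityEngine;

public class NetworkedPatient : Photon.MonoBehaviour
{
    void Update()
    {
        if (photonView.isMine)
        {
            // Owner mode: this client moves the object (input, controller logic etc.).
        }
        // Ghost mode: other clients just apply whatever arrives in OnPhotonSerializeView.
    }

    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            // We own this object: send our state.
            stream.SendNext(transform.position);
        }
        else
        {
            // Someone else owns it: receive and apply their state.
            transform.position = (Vector3)stream.ReceiveNext();
        }
    }
}
```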
For the hospital though I needed a large control structure that defined the state of all the patients and the things that happened to them. It made sense to have the master controller built into the master client. In Unity and Photon the player that initiates a game and allows connection of others is the master. Helpfully there are data properties you can query to find this sort of thing out. So you end up with lots of code that is "if master do x else do y".
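That pattern looks roughly like the following. It is only a sketch: the method names are made up, and the RPC attribute name varies between Photon versions.

```csharp
// Sketch of the "if master do x else do y" pattern around the central model.
using UnityEngine;

public class AdmissionRequester : Photon.MonoBehaviour
{
    public void AdmitPatient(string patientId, string zone)
    {
        if (PhotonNetwork.isMasterClient)
        {
            // Master client owns the model: update the central state directly.
            Debug.Log("Master admitting " + patientId + " to " + zone);
        }
        else
        {
            // Other clients ask the master to do it via an RPC.
            photonView.RPC("RequestAdmit", PhotonTargets.MasterClient, patientId, zone);
        }
    }

    [RPC]   // [PunRPC] in later Photon versions
    void RequestAdmit(string patientId, string zone)
    {
        Debug.Log("Request received to admit " + patientId + " to " + zone);
    }
}
```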
Whoever initiates the scenario then becomes the admin for these simulations. This became a helpful differentiation. I was able to provide some overseeing functions, some start, stop pause functions only to the admin. This was something that was a bit trickier in SL but just became a natural direction in Unity/Photon.
One of my favourite admin functions was to turn off all the walls, just for the admin. Every other client is still able to see the walls, but the admin has super powers and can see what is going on everywhere.
Walls

No walls

This is a harder concept for people to take in when they are used to Second Life or have a real world model in their minds. Each client can see or act on the data however it chooses. What you see in one place does not have to be an identical view to what others see. Sometimes that fact is used to cheat in games, removing collision detection or being able to see through walls. However here it is a useful feature.

This formed the start of an idea for some more non real world admin functions to help monitor what is going on, such as camera textures that let you see things elsewhere. As an example, wards can be looked at top down in 2d, or more like a CCTV camera seeing in 3d. Ideally the admin is really detached from the process. They need to see the mechanics of how people interact, not always be restricted to the physical space. Participants however need to be restricted in what they can see and do, in order to add the elements of stress and confusion that the simulation is aiming for.

Screens

Unity gave me a chance to redesign the patient arrival process. Patients did not just arrive in the bays; instead I put up a simple window of arrivals: patient numbers and where they were supposed to be heading. Though a very simple technique, this seemed to help general awareness for all participants that things were happening. Suddenly 10-15 entries arriving in quick succession, seemingly at the door to the hospital, triggers more awareness than lights turned on in and around beds. The lights and indicators were still there, as we still needed to show new patients and ones that were moving. When a patient was admitted to a bed I put in the option to specify a bed or to just say the next available one. In order to know where the patient had gone, the observation phase is now additionally indicated by a group of staff around the patient. I had some grand plans using the full version of Unity Pro to use the path finding code to have non player character (NPC) staff dash towards the beds and to have people walking to and fro around the hospital for atmosphere. This turned out to be a bit too much of a performance hit for lower spec machines, though it is back on the wish list. It was fascinating seeing how the pathfinding operated. You are able to define buffer zones around walls and indicate what can be climbed or what needs to be avoided. You can then tell an object an end point and off it goes, dynamically recalculating paths, avoiding things and dealing with collision, giving doors enough room and, if you do it right (or wrong), leaping over desks, beds and patients to get to where they need to go 🙂
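The pathfinding side of that plan is pleasingly little code once a NavMesh is baked for the hospital. A tiny sketch of the idea, assuming a NavMeshAgent on the staff prefab and the bed transform coming from elsewhere in the scene:

```csharp
// Sketch of the NPC staff idea: bake a NavMesh, add a NavMeshAgent, set a destination.
using UnityEngine;

[RequireComponent(typeof(NavMeshAgent))]
public class StaffRunner : MonoBehaviour
{
    public void DashTo(Transform bed)
    {
        // The agent handles path recalculation, avoidance and doorway clearance itself.
        GetComponent<NavMeshAgent>().SetDestination(bed.position);
    }
}
```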

One of the biggest challenges was that of voice. Clearly people needed to communicate; that was the purpose of the exercise. I used a voice package that packaged messages up and sent them across the network using Photon. This was not spatial voice in the way people were used to with Second Life. However I made some modifications, as I already had roles for people in the simulation. If you were in charge of A&E I had to know that. So role selection became an extra feature, not needed in SL where it was implied. This meant I could alter the voice code to deal with channels: A&E could hear A&E. The admin was also able to act as a tannoy and be heard by everyone. This then started to double up as a phone system; A&E could call the Operating Theatre channel and request they take a patient. Initially this was a push to talk system. I made the mistake of changing it to an open mic. That meant every noise or sound made was constantly sent to every client, and the channel implementation meant the code chose to ignore things not meant for it. This turned out (obviously) to massively swamp the Photon server when we had all our users online. So that needs a bit of work!

Another horrible gotcha was that I needed to log data. Who did what when was important for the research. As this was in Unity I was able to create those logging structures and their context. However, because we were in a web browser I was not able to write to the file system. So the next best solution was to have a logging window for the admin that they could at least cut and paste all the log entries from. (This was to avoid having to set up another web server and send the logs to it over HTTP, as that was added overhead to the project to manage.) I created the log data and a simple text window that all the data was pumped to. It scrolled, and you could get an overview and also simply cut and paste. I worked out the size was not going to break any data limits, or so I thought. However, in practice this text window stopped showing anything after a few thousand entries, something I had not tested far enough. It turns out that text is treated the same as any other vertex based object and there are limits to the number of vertices an object can have. So it stopped being able to draw the text, even though lots of it was scrolled off screen; the definition of the object had become too big, i.e. this isn't like a web text window. It makes sense, but it did catch me out as I was thinking of it as "just text".
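One way around that vertex limit is to keep the full log in memory for cut and paste while only rendering the most recent lines. A small sketch of that approach; the visible line count is an illustrative guess, not a measured limit.

```csharp
// Sketch of an admin log window that only renders the tail of the log.
using System.Collections.Generic;
using UnityEngine;

public class AdminLogWindow : MonoBehaviour
{
    private List<string> allEntries = new List<string>();
    private const int VisibleLines = 200;   // keep the rendered text well under the mesh limits
    private Vector2 scroll;

    public void Log(string entry)
    {
        allEntries.Add(System.DateTime.Now.ToString("HH:mm:ss") + " " + entry);
    }

    void OnGUI()
    {
        int start = Mathf.Max(0, allEntries.Count - VisibleLines);
        string visible = string.Join("\n",
            allEntries.GetRange(start, allEntries.Count - start).ToArray());

        scroll = GUILayout.BeginScrollView(scroll, GUILayout.Width(400), GUILayout.Height(300));
        GUILayout.TextArea(visible);   // TextArea also allows select-all for cut and paste
        GUILayout.EndScrollView();
    }
}
```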

An interesting twist was the generic noticeboard that gave an overview of the dynamic patients in the dynamic wards. This became a bigger requirement than the quick extra helper it had been in Second Life. As a real hospital would have a whiteboard, with patient summaries and various notes attached to it, it made sense to build one. This meant that you would be able to take some notes about a patient, or indicate they needed looking at or had been seen. It sounds straightforward, but the note taking turned out to be a little more complicated. Bear in mind this is a networked application: multiple people can have the same noticeboard open, yet it is controlled by the master client. Typing in notes needed to be updated in the model and the changes sent to others. Yes, it turned out I was rewriting Google Docs! I did not go quite that far, but I did have to indicate if someone else had edited the things you were editing too.
We had some interesting things relating to the visuals too. Robin had made a set of patients of various sizes, shapes and genders. However with 50 patients or so (as there can be any number defined), and each one described in text, "a 75 year old lady" etc., it was very tricky to have all the variants that were going to be needed. I had taken the view that it would have been better to have generic morph style characters in the beds to avoid content bloat. The problem with "real" characters is they have to match the text (that's editorial control), and also you need more than one of each type. If you have a ward full of 75 year old ladies and there are only 4 models it is a massive uncanny valley hit. The problem then balloons when you start building injuries like amputations into the equation. Very quickly something that is about bed management can become about the details of a virtual patient. In a fully integrated simulation of medical practice and hospital management that can happen, but the core of this project was the pressure of beds, i.e. in air traffic control terms we needed to land the planes; the type of plane was less important (though still had a relevance).

It is always easy to lose sight of the core learning or game objective with the details of what can be achieved in virtual environments. There is a time and cost to more content, to more code. However I think we managed to get a good balance with the release we have, and now can work on the tweaks to tidy it up and make it even better.

The application has also been of interest to schools. We had it on the Imperial College stand at the Big Bang education festival. I had to make an offline version of this. I was hoping to simply use Unity's offline web publish option, which is supposed to remove the need for any network connection or plugin validation. It never worked properly for me though; it always needed the network. I am not sure if anyone else is having that problem, but don't rely on it. That meant I then had to build standalone versions for Mac and Windows. Not a huge step, but an extra set of targets to keep in line. I also had to hack the code a bit. Photon is good at doing "offline" and ignoring some of the elements, but I was relying on a few things like how I generated the room name to help identify the scenario. In offline mode the room name is ignored and a generic name is put in place. Again quite understandable, but it caused me a bit of offline rework that I probably could have avoided.

In order to make it a bit more accessible Dave wrote a new scenario with some funnier ailments. This is where we broke our base design principle and, yes, we put zombies in. I had the excellent Mixamo models and a free Gangnam Style dance animation. Well, it would have been silly not to put them in. So in the demos, if the kids started to drift off, the "special" button could get pushed and there they were.
Zombies
I have shared a bit of what it takes to build one of these environments. It has got easier, but that means the difficult things have hidden themselves elsewhere.

If you need to know more about this and similar projects, the official page for it is here

Obviously if you need something like this built, or talked about to people let me know 🙂

A busy week of sharing ideas

This week has been a very busy one of sharing ideas with all sorts of people in all sorts of places. It started Monday with the BCS Animation and Games Specialist Group AGM, with a bit of the usual admin, making sure we are all happy with who is on the committee and inviting a new member to the fold. Then I got to do another presentation. As I had already done my blended learning with games one a few weeks before, I talked about how close we might be to a holodeck, mostly based on my article in Flush Magazine on the subject. Afterwards the discussion with some of the game development students turned into a tremendous ideas jam session. It was a very cool moment.
Tuesday I did a repeat visit (after 2 years) to BCS Birmingham for the evening and gave the ever evolving talk on getting tech into TV land for kids. I have to de-video the pitch and pop it onto Slideshare, as I use footage from The Cool Stuff Collective and I don't have the media rights to put those bits up online. Something I am always asked is why? I don't have a good answer as it is out of my hands.
Wednesday was a quick trip to Silicon Roundabout to talk funding and games and startups. Getting off the tube at Old Street and walking to ozone for a coffee based meeting felt like stepping into a very vibrant hub of startup and tech activity. Which it is of course.
Fancy coffee
Thursday was an early start and off to London for the BCS members convention. It was good to catch up with a lot of people I have met over the years through this professional body. It was very refreshing to see a definite fresh approach by the BCS with some great presentations, lots of tweeting and some interesting initiatives.
Lots of companies work with Bcs on computing as 4th science in uk schools #bcsmgcon
The most important was the work the BCS Academy has done (along with industry) to get Computer Science recognised by the Department for Education as the fourth science in the new syllabus. They really are going to start teaching programming from 5 years old and up! Result!
The day had already started well when I was honoured to receive an appreciation award from the BCS for the work I do as chair of Animation and Games. As I had done two talks the two days before this was very timely 🙂
A nice certificate. Much appreciated
Friday was a trip to help on the Imperial College London stand at the massive education science fair The Big Bang Exhibition.
Big Bang fair
Excel was buzzing with thousands of visitors.
Big Bang fair
There were tech zones, biology zones, 3d printers, thrust sec and even an entire maker zone.
Big Bang fair
The robot Rubik's Cube solver was there in all its glory too.
Big Bang fair
Space was well represented with real NASA astronauts chatting and drawing quite a crowd.
Big Bang fair
The Imperial College London stand was based around the medical side of things and on the corner of that stand was a Feeding Edge Ltd production 🙂
There will be a proper post on this but here is a link to the official line on the Multidisciplinary Major Incident Simulator

This Unity3d and Photon Cloud solution (that I wrote the code for) allows staff to practise a kind of air traffic control situation: dealing with a massive influx of patients, having to shuffle beds and decide who goes where.
It was great being able to show this to a younger generation at the exhibition. Many came to the stand because they were interested in medicine. However a few came over to talk tech 🙂 They wanted to be programmers and build games. We had some good chats about Minecraft too. Many of them appreciated the zombies that we threw into the configurable scenarios too 🙂 If just a few % of the people who came to the fair get excited about science it will be worth it. It seemed there was a new vibrant interest in all the sciences and it is not as gloomy as it may appear sometimes.
Now it's time to get my Dobok on and go and celebrate the opening of Hampshire's first full time Choi Kwang Do facility and help get more people interested in the wonders of Choi. Phew!