

Training in a virtual hospital + zombies

It is not very often I get to write in much detail about some of the work that I do, as often it is within other projects and not always for public consumption. My recent MMIS (Multidisciplinary Major Incident Simulator) work for Imperial College London, and in particular for Dave Taylor, is something I am able to talk about. I have known Dave for a long while through our early interactions in Second Life, when I was a Metaverse Evangelist at my previous company and he was at NPL. Since then we have worked together on a number of projects, along with the very talented artist and 3d modeller Robin Winter. If you do a little digging you will find Robin has been a major builder of some of the most influential places in Second Life.
Our brief for this project was to create a training simulation for dealing with a massive influx of patients to an already nearly full hospital. The aim is that several people, each running a different area of the hospital, have to work together to make space and move patients around to cope with the influx of new arrivals. It is about human cooperation and following protocol to reach as good an answer as possible. We also had a design principle/joke of “No Zombies”.

Much of this sort of simulation happens around a desk at the moment, in a more role-play, D&D type of fashion. That sort of approach offers a lot of flexibility to change the scenario and to throw in new things. In moving to an online virtual environment version of the same simulation activity we did not want to lose that flexibility.
Initially we prototyped the whole thing in Second Life. Robin built a two floor hospital and created the atmosphere and visual triggers that participants would expect in a real environment.
Second LifeScreenSnapz002

Note this already moves on from sitting around a table focussing on the task and starts to place it in context. Something to note, however, is that the environment and the creation of it can become a distraction from the learning and training objective. It is a constant balance between real modelling, game-style information and finding the right triggers to immerse people.

For me the challenge was how to manage an underlying data model of patients in beds in particular places, of inbound patients, and a simple enough interface to allow bed management to be run by people in world. An added complication was that specific timers and delays needed to be in place. Each patient may take more or less time to be moved depending on their current treatment. So you need to be able to request that a patient is moved, but you may then have to wait a while until the bed is free. Incoming patients to a bed also have a certain time to be examined and dealt with before they can possibly be moved again.

A more traditional object-oriented approach might be for each patient to hold their own data and timings, but I decided to centralise the data model for simplicity. The master controller in world decided who needed to change where and sent a message to any interested party to do what they needed to do. That meant the master controller held the data for the various timers on each patient and acted as the state machine for the patients.
In order to have complete flexibility of hospital layout too, I made sure that each hospital bay was not a fixed point. This meant dynamically creating patients, beds and equipment at the correct point in space in world. I used the concept of a spawn point. Uniquely identified bay names placed as spawn points around the hospital meant we could add and remove bays and change the hospital layout without changing any code, making this as data driven as possible. Multiple scenarios could then be defined with different bay layouts and hospital structure, with different types of patients and time pressures, again without changing code. The ultimate aim was to be able to generate a modular hospital based on the needs of the scenario. We stuck to the basics though, of a fixed environment (as it was easy to move walls and rooms manually in Second Life), with dynamic bays that rezzed things in place.
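A minimal sketch of the spawn point idea, in the C# terms the later Unity version used (the class and prefab names here are hypothetical, not the project's actual code):

using System.Collections.Generic;
using UnityEngine;

// Sketch: empty objects named after bay ids act as spawn points, so the
// scenario data can place beds and patients without any code changes.
public class BaySpawner : MonoBehaviour
{
    public GameObject bedPrefab;
    public GameObject patientPrefab;

    // patientToBay maps a patient id to the bay name it starts in.
    public void SpawnScenario(Dictionary<string, string> patientToBay)
    {
        foreach (KeyValuePair<string, string> entry in patientToBay)
        {
            GameObject bay = GameObject.Find(entry.Value);
            if (bay == null) continue; // bay not present in this layout

            Instantiate(bedPrefab, bay.transform.position, bay.transform.rotation);
            Instantiate(patientPrefab, bay.transform.position, bay.transform.rotation);
        }
    }
}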
This meant I could actually build the entire thing in an abstract way on my own plot of land, also as a backup.
code
I love to use shapes to indicate the function of something in context. The controller is the box in this picture. The wedge shape is the data. They are close to one another physically. The tori are the various bays and beds. The flat planes represent the whiteboard. They are grouped in order and in place. You can be in the code, and can think about object responsibility through shape and space. It may not work for everyone but it does for me. The controller has a lot of code in it that also has a more traditional structure. One day it would be nice to be able to see and organise that in a similar way.

This created a challenge in Second Life as there is only actually 64k of memory available to each script. Here I had a script dealing with a variable number of patients, around 50 or so. Each patient needed data for several timer states and some identification and description text. Timers in Second Life are a one-per-script sort of structure, so instead I had to use a timer loop to update all the individual timers and check for timeouts on each patient. Making the code nice and readable with lots of helper functions proved not to be the ideal way forward. The overhead of tidiness was more bytes of the stack getting eaten up. So gradually the code was hacked back to being inline operations of common functions. I also had to initiate a lookup in a separate script object for the longer pieces of text, and ultimately yet another to match patients to models.
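For the curious, the pattern looks roughly like this; I have sketched it here in C# for readability rather than the LSL it was actually written in (all names are hypothetical):

using System.Collections.Generic;
using UnityEngine;

// Sketch: one loop polls every patient's deadlines instead of having a
// timer per patient, which is the shape the one-timer-per-script limit forced.
public class PatientTimers : MonoBehaviour
{
    class PatientState
    {
        public string id;
        public float moveReadyAt; // when a requested move may complete
        public float examDoneAt;  // when the observation phase ends
    }

    List<PatientState> patients = new List<PatientState>();

    void Update()
    {
        float now = Time.time;
        foreach (PatientState p in patients)
        {
            if (p.moveReadyAt > 0f && now >= p.moveReadyAt)
            {
                p.moveReadyAt = 0f; // bed is now free; notify interested objects
            }
            if (p.examDoneAt > 0f && now >= p.examDoneAt)
            {
                p.examDoneAt = 0f; // patient examined; may be moved again
            }
        }
    }
}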

The prototype enabled the customers (doctors and surgeons) to role play the entire thing through and helped highlight some changes that we needed to make.
The most notable was that of indicating to people what was going on. The atmosphere of pressure of the situation is obviously key. Initially the arriving patients were sent directly to the ward or zone indicated in the scenario configuration. This meant I had to write a way to find the next available/free bed in a zone, generic enough to deal with however many beds have been defined in the dynamic hospital. Patients arrived, neatly assigned to various beds. Of course, as a user of the system this was confusing. Who had arrived where? Teleporting patients into bays is not what normally happens. To try and help I added non-real-world indicators, lights over beds etc., so that a glance across a ward could show new patients that needed to be dealt with.
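As an aside, the next-available-bed search mentioned above boils down to something like this sketch (hypothetical names; the real code worked over whatever beds the scenario had spawned):

using System.Collections.Generic;
using System.Linq;

// Sketch: find the next free bed in a zone, whatever beds the
// dynamic hospital happens to contain.
public class Bed
{
    public string Zone;
    public bool Occupied;
}

public static class BedFinder
{
    public static Bed NextFreeBed(IEnumerable<Bed> beds, string zone)
    {
        return beds.FirstOrDefault(b => b.Zone == zone && !b.Occupied);
    }
}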
If a patient arrived automatically but there was no bed, they were placed on one of two beds in the corridor: a sort of visual priority queue. That was a good mechanism to indicate overload and pressure for the exercise. However, we were left with the problem of what happened when that queue was full. The patients had become active and arrived in the simulation but had nowhere to go. In game terms this is of course a massive failure, so I treated it as such and held the patients in an invisible error ward, but put out a big chat message saying this patient needed dealing with.
I felt it was too clunky to have to walk around the ward keeping an eye out, so, as I had a generic messaging system that told patients and beds where to be, I was able to make more than one object respond to a state change. This led to a quick build of a notice board in the ward. At a glance, red, green and yellow statuses on beds could be seen. Still, I was not convinced this was the right place for that sort of game-style pressure. It needed a different admissions process, one controlled by the ward owners. They would need to be able to still say “yes, bring them in to the next available bed” (so my space-finding code would still work) or direct a patient to a specific bed.
The overall bed management process once a patient was “in” the hospital
SL Hospital
The prototype led to the build of the real thing. It was a natural path of migration to Unity3d, as I knew we could efficiently build the same thing in a way that would let people simply use web browsers to access the hospital. I also knew that using Exit Games' Photon server I could get client applications talking to one another in sync. From a software engineering point of view I knew that in C# I could create the right data structures and underlying model to replicate the Second Life version, but in a much better code structure. It also meant I could initiate a lot more user interface elements more simply, as this was a dedicated client for this application. HUDs work in Second Life for many things, but ultimately you are trying to stop things happening: you don't want people building or moving things etc. In a fixed and dedicated Unity client you can focus on the task. Of course Second Life already had a chat system and voice, so there were clearly a lot of extra things to build in Unity, but there is more than one way to skin a virtual cat.

The basic hospital and patient bed spawning points connected via Photon in Unity were actually quite quick to put together, but as ever the devil is in the detail. Second Life is a server-based application that clients connect to. In Unity you tend to have one of the clients act as a server, or you have each client responsible for something and let the others take a ghosted synchronisation; or a mixture, as I ended up with. Shared network code is complicated. Understanding who has control and responsibility for something, when it is distributed across multiple clients, takes a bit of thought and attention.

The easiest one is the player character. All the Unity and Photon examples work pretty much the same way. Using the Photon toolkit you can instantiate a Unity object on a client and have that client as the owner. You then have the parameters or data that you want to synchronise defined in a fairly simple interface class. The code for objects has two modes that you cater for: the first is being owned and moved around; the other is being a ghosted object owned by someone else, just receiving serialised messages about what to do. There are also RPC calls that can be made asking objects on other clients to do something. This is the basis for everything shared across clients.
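A minimal sketch of that two-mode pattern, assuming the old Photon Unity Networking API (exact class and property names vary between Photon versions, and the script must be set as an observed component on its PhotonView):

using UnityEngine;

// Sketch: the owning client writes state, ghosted copies just read it.
public class NetworkedPatient : Photon.MonoBehaviour
{
    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            stream.SendNext(transform.position); // we own it: publish
        }
        else
        {
            transform.position = (Vector3)stream.ReceiveNext(); // ghost: apply
        }
    }

    void Update()
    {
        if (photonView.isMine)
        {
            // only the owner runs input/movement logic here
        }
    }
}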
For the hospital though I needed a large control structure that defined the state of all the patients and the things that happened to them. It made sense to have the master controller built into the master client. In Unity and Photon the player that initiates a game and allows connection of others is the master. Helpfully there are data properties you can query to find this sort of thing out. So you end up with lots of code that is “if master do x else do y”.
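In old-PUN terms that branching is as literal as it sounds; a tiny sketch, assuming PUN's PhotonNetwork.isMasterClient flag:

using UnityEngine;

// Sketch: the master client runs the authoritative state machine,
// everyone else just renders what it broadcasts.
public class HospitalController : MonoBehaviour
{
    void Update()
    {
        if (PhotonNetwork.isMasterClient)
        {
            // x: advance patient timers, decide moves, send updates
        }
        else
        {
            // y: apply whatever state the master has sent
        }
    }
}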
Whoever initiates the scenario then becomes the admin for these simulations. This became a helpful differentiation. I was able to provide some overseeing functions, and some start, stop and pause functions, only to the admin. This was something that was a bit trickier in SL but just became a natural direction in Unity/Photon.
UnityScreenSnapz017
One of my favourite admin functions was to turn off all the walls, just for the admin. Every other client is still able to see the walls, but the admin has super powers and can see what is going on everywhere.
Walls

No walls

This is a harder concept for people to take in when they are used to Second Life or have a real-world model in their minds. Each client can see or act on the data however it chooses. What you see in one place does not have to be an identical view to others. Sometimes that fact is used to cheat in games, removing collision detection or being able to see through walls. Here, however, it is a useful feature.

This formed the start of an idea for some more non-real-world admin functions to help monitor what is going on, such as camera textures that let you see things elsewhere. As an example, wards can be looked at top-down in 2d, or more like a CCTV camera seeing in 3d. Ideally the admin is really detached from the process. They need to see the mechanics of how people interact, not always be restricted to the physical space. Participants, however, need to be restricted in what they can see and do in order to add the elements of stress and confusion that the simulation is aiming for.

Screens

Unity gave me a chance to redesign the patient arrival process. Patients did not just arrive in the bays; instead I put up a simple window of arrivals: patient numbers and where they were supposed to be heading. Though a very simple technique, this seemed to help give all participants a general awareness that things were happening. Suddenly 10-15 entries arriving in quick succession, seemingly at the door to the hospital, triggers more awareness than lights turned on in and around beds. The lights and indicators were still there, as we still needed to show new patients and ones that were moving. When a patient was admitted to a bed I put in the option to specify a bed or to just say the next available one. In order to know where the patient had gone, the observation phase is now additionally indicated by a group of staff around the patient. I had some grand plans, using the full version of Unity Pro, to use the pathfinding code to have non-player character (NPC) staff dash towards the beds, and to have people walking to and fro around the hospital for atmosphere. This turned out to be a bit too much of a performance hit for lower spec machines, though it is back on the wish list. It was fascinating seeing how the pathfinding operated. You are able to define buffer zones around walls and indicate what can be climbed or what needs to be avoided. You can then give an object an end point and off it goes, dynamically recalculating paths, avoiding things and dealing with collisions, giving doors enough room and, if you do it right (or wrong), leaping over desks, beds and patients to get to where it needs to go 🙂
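For reference, the agent side of that experiment needs very little code. A minimal sketch, assuming a NavMesh baked for the hospital and Unity's NavMeshAgent component (the bed-assignment hook is a hypothetical name; in newer Unity versions NavMeshAgent lives in the UnityEngine.AI namespace):

using UnityEngine;

// Sketch: send an NPC staff member across the baked NavMesh to a bed;
// the agent recalculates its path around walls, doors and obstacles.
[RequireComponent(typeof(NavMeshAgent))]
public class StaffRunner : MonoBehaviour
{
    public void DashToBed(Transform bed)
    {
        GetComponent<NavMeshAgent>().SetDestination(bed.position);
    }
}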

One of the biggest challenges was that of voice. Clearly people needed to communicate; that was the purpose of the exercise. I used a voice package that attempted to send voice messages across the network using Photon. This was not spatial voice in the same way people were used to with Second Life. However, I made some modifications, as I already had roles for people in the simulation. If you were in charge of A&E I had to know that. So role selection became an extra feature, not needed in SL where it was implied. This meant I could alter the voice code to deal with channels: A&E could hear A&E. The admin was also able to act as a tannoy and be heard by everyone. This then started to double up as a phone system; A&E could call the Operating Theatre channel and request they take a patient. Initially this was a push-to-talk system. I made the mistake of changing it to an open mic. That meant every noise or sound made was constantly sent to every client, and the channel implementation meant the code chose to ignore things not meant for it. This turned out (obviously) to massively swamp the Photon server when we had all our users online. So that needs a bit of work!
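The channel filtering was nothing cleverer than tagging each packet and having clients drop anything not addressed to them; roughly this shape, with entirely hypothetical names:

// Sketch: every client receives every packet and simply ignores channels
// it is not tuned to, which is exactly why an open mic flooded the server.
public class VoicePacket
{
    public string Channel; // e.g. "A&E", "Theatre", "Tannoy"
    public byte[] Audio;
}

public class VoiceReceiver
{
    public string MyChannel;

    public void OnPacket(VoicePacket p)
    {
        if (p.Channel != MyChannel && p.Channel != "Tannoy")
            return; // dropped here, but it already cost bandwidth to arrive
        Play(p.Audio);
    }

    void Play(byte[] audio) { /* hand off to the audio system */ }
}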

Another horrible gotcha was that I needed to log data. Who did what when was important for the research. As this was in Unity I was able to create those logging structures and their context. However, because we were in a web browser I was not able to write to the file system. So the next best solution was to have a logging window for the admin that they could at least cut and paste the log from. (This was to avoid having to set up another web server and send the logs to it over HTTP, as that was added overhead for the project to manage.) I created the log data and a simple text window that all the data was pumped to. It scrolled, and you could get an overview and also simply cut and paste. I worked out the size was not going to break any data limits, or so I thought. However, in practice this text window stopped showing anything after a few thousand entries. Something I had not tested far enough. It turns out that text is treated the same as any other vertex-based object, and there are limits to the number of vertices an object can have. So it stopped being able to draw the text, even though lots of it was scrolled off screen. The definition of the object had become too big, i.e. this isn't like a web text window. It makes sense, but it did catch me out as I was thinking of it as "just text".
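With hindsight the fix is to cap what the window actually renders rather than rely on scrolling. A sketch of the sort of trim that keeps the text object under its vertex budget (the 200-line cap is an arbitrary assumption; the real ceiling depends on the text component):

using System.Collections.Generic;

// Sketch: keep the full log in a list but only render the newest lines,
// so the on-screen text object never exceeds its vertex limit.
public class AdminLog
{
    const int MaxVisibleLines = 200; // arbitrary; tune to the text object
    List<string> allEntries = new List<string>();

    public string VisibleText; // bound to the on-screen text object

    public void Log(string entry)
    {
        allEntries.Add(entry);
        int start = System.Math.Max(0, allEntries.Count - MaxVisibleLines);
        VisibleText = string.Join("\n",
            allEntries.GetRange(start, allEntries.Count - start).ToArray());
    }
}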

An interesting twist was the generic noticeboard that gave an overview of the dynamic patients in the dynamic wards. This became a bigger requirement than the quick extra helper in Second Life. As a real hospital would have a whiteboard, with patient summaries and various notes attached to it, it made sense to build one. This meant that you would be able to take some notes about a patient, or indicate they needed looking at or had been seen. It sounds straightforward, but the note taking turned out to be a little more complicated. Bear in mind this is a networked application: multiple people can have the same noticeboard open, yet it is controlled by the master client. Notes typed in needed to be updated in the model and the changes sent to others. Yes, it turned out I was rewriting Google Docs! I did not go quite that far, but I did have to indicate if someone had edited the things you were editing too.
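The shape of that syncing, sketched in old-PUN style (every name here is hypothetical; the real thing also pushed each change into the master client's model):

using UnityEngine;

// Sketch: push each note edit to every client and name the editor,
// so concurrent edits to the same note can be flagged.
public class Whiteboard : Photon.MonoBehaviour
{
    public void EditNote(int noteId, string text)
    {
        photonView.RPC("UpdateNote", PhotonTargets.All,
                       noteId, text, PhotonNetwork.player.name);
    }

    [RPC] // older PUN attribute; newer versions use [PunRPC]
    void UpdateNote(int noteId, string text, string editor)
    {
        // update the local copy of the model here, and flag the note
        // if someone else is editing it at the same time
        Debug.Log("Note " + noteId + " changed by " + editor);
    }
}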
We had some interesting things relating to the visuals too. Robin had made a set of patients of various sizes, shapes and genders. However, with 50 or so patients (as there can be any number defined), each one described in text (“a 75 year old lady” etc.), it was very tricky to have all the variants that were going to be needed. I had taken the view that it would have been better to have generic morph-style characters in the beds to avoid content bloat. The problem with “real” characters is that they have to match the text (that's editorial control), and you also need more than one of each type. If you have a ward full of 75 year old ladies and there are only 4 models it is a massive uncanny valley hit. The problem then balloons when you start building injuries like amputations into the equation. Very quickly, something that is about bed management can become about the details of a virtual patient. In a fully integrated simulation of medical practice and hospital management that can happen, but the core of this project was the pressure of beds; i.e. in air traffic control terms we needed to land the planes; the type of plane was less important (though still had a relevance).

It is always easy to lose sight of the core learning or game objective among the details of what can be achieved in virtual environments. There is a time and cost to more content and more code. However, I think we managed to get a good balance with the release we have, and now we can work on the tweaks to tidy it up and make it even better.

The application has also been of interest to schools. We had it on the Imperial College stand at the Big Bang education festival. I had to make an offline version of this. I was hoping to simply use Unity's offline web publish option, which is supposed to remove the need to have any network connection or plugin validation. It never worked properly for me though; it always needed the network. I am not sure if anyone else is having that problem, but don't rely on it. That meant I then had to build standalone versions for Mac and Windows. Not a huge step, but an extra set of targets to keep in line. I also had to hack the code a bit. Photon is good at doing “offline” and ignoring some of the elements, but I was relying on a few things like how I generated the room name to help identify the scenario. In offline mode the room name is ignored and a generic name is put in place. Again, quite understandable, but it caused me a bit of offline rework that I probably could have avoided.

In order to make it a bit more accessible, Dave wrote a new scenario with some funnier ailments. This is where we broke our base design principle, and yes, we put zombies in. I had the excellent Mixamo models and a free Gangnam Style dance animation. Well, it would have been silly not to put them in. So in the demos, if the kids started to drift off, the “special” button could get pushed and there they were.
Zombies
I have shared a bit of what it takes to build one of these environments. It has got easier, but that means the difficult things have hidden themselves elsewhere.

If you need to know more about this and similar projects, the official page for it is here

Obviously if you need something like this built, or talked about to people let me know 🙂

A busy week of sharing ideas

This week has been a very busy one of sharing ideas with all sorts of people in all sorts of places. It started Monday with the BCS Animation and Games specialist group AGM, with a bit of the usual admin, making sure we are all happy with who is on the committee and inviting a new member to the fold. Then I got to do another presentation. As I had already done my blended learning with games one a few weeks before, I talked about how close we might be to a holodeck, mostly based on my article in Flush Magazine on the subject. Afterwards the discussion with some of the game development students turned into a tremendous ideas jam session. It was a very cool moment.
Tuesday I did a repeat visit (after 2 years) to BCS Birmingham for the evening and gave the ever-evolving talk on getting tech into TV land for kids. I have to de-video the pitch and pop it onto Slideshare, as I use footage from The Cool Stuff Collective and I don't have the media rights to put those bits up online. It is something I am always asked about: why? I don't have a good answer, as it is out of my hands.
Wednesday was a quick trip to Silicon Roundabout to talk funding, games and startups. Getting off the tube at Old Street and walking to Ozone for a coffee-based meeting felt like stepping into a very vibrant hub of startup and tech activity. Which it is, of course.
Fancy coffee
Thursday was an early start and off to London for the BCS members convention. It was good to catch up with a lot of people I have met over the years through this professional body. It was very refreshing to see a definite fresh approach by the BCS with some great presentations, lots of tweeting and some interesting initiatives.
Lots of companies work with Bcs on computing as 4th science in uk schools #bcsmgcon
The most important was the work the BCS Academy has done (along with industry) to get Computer Science recognised by the Department for Education as the fourth science in the new syllabus. They really are going to start teaching programming from 5 years up! Result!
The day had already started well when I was honoured to receive an appreciation award from the BCS for the work I do as chair of Animation and Games. As I had done two talks in the two days before, this was very timely 🙂
A nice certificate. Much appreciated
Friday was a trip to help on the Imperial College London stand at the massive education science fair The Big Bang Exhibition.
Big Bang fair
Excel was buzzing with thousands of visitors.
Big Bang fair
There were tech zones, biology zones, 3d printers, Thrust SSC and even an entire maker zone.
Big Bang fair
The robot Rubik's Cube solver was there in all its glory too.
Big Bang fair
Space was well represented with real NASA astronauts chatting and drawing quite a crowd.
Big Bang fair
The Imperial College London stand was based around the medical side of things and on the corner of that stand was a Feeding Edge Ltd production 🙂
There will be a proper post on this but here is a link to the official line on the Multidisciplinary Major Incident Simulator
imperialhospitalScreenSnapz005

UnityScreenSnapz017
This Unity3d and Photon Cloud solution (that I wrote the code for) allows staff to practice a kind of air traffic control situation: dealing with a massive influx of patients, having to shuffle beds and decide who goes where.
It was great being able to show this to a younger generation at the exhibition. Many came to the stand because they were interested in medicine. However, a few came over to talk tech 🙂 They wanted to be programmers and build games. We had some good chats about Minecraft too. Many of them appreciated the zombies that we threw into the configurable scenarios too 🙂 If just a few per cent of the people who came to the fair get excited about science it will be worth it. It seemed though there was a new vibrant interest in all the sciences, and it is not as gloomy as it may appear sometimes.
Now it's time to get my dobok on and go and celebrate the opening of Hampshire's first full-time Choi Kwang Do facility and help get more people interested in the wonders of Choi. Phew!

A bit of SL fun then onto real CKD Virtual World Coaching

It has been a while since I just went for a wander in Second Life. I have very little building space on my islands now, as they are pretty much totally rented out, so I just have one corner of Hursley. So I thought, as I was in there, I would pop off and have a look around. It always helps to have a subject or a reason or something to search for, but I started off just looking at some art.
Kinesis sculpture
Then it dawned on me that I had not explored martial arts in Second Life for a very long while, and I thought I would take a look with my new Choi Kwang Do enabled brain (enabled by SouthCoast CKD 🙂 ).
There are a fair few martial arts related places, groups etc., though none specific to CKD. I did check out an arena for more kung fu and weapon related battles at Colibri.
exploring SL martial arts
Then I thought it was time the CKD logo made its way into SL, and so my little plot now has the start of a virtual Dojang.
trying some ckd in SL
I popped along to Abranimations, just like in the old days back in '06 (is it really that long ago!), and checked out the fighting systems and animations. The closest was a kickboxing one.
abranimation
Then I shot this little video to see how far off the animations are from CKD. The guard hand and stance and a lot of the moves are not as flowing as CKD, but it shows an interesting potential to people not yet versed in virtual world tech and sports.

Now I am wondering about taking the Kinect tracking and seeing if I can mocap my patterns for CKD and get the BVH file up into SL. Just so I could use a lot of acronyms 🙂 I know the skeleton format is going to be different, but it is something to work on.
This is initially just a bit of fun, but… as we know with projects like The Coaches Center, we are getting closer to being able to hold gatherings and meetings and share more insights.
Here I am, sat in my personal coach's office, with Choi terms on the board and a synchronized version of the Kinect CKD test playing; the same view anyone would get if I invited them in.
Snapz Pro XScreenSnapz021
(You can also load videos and graphics etc onto various other boards in the room to share with people)
So there I am with a virtual presence, a shared space and all the tools available: voice, text, imagery, avatar placement, reaching out to the web to pull in other content. All in Unity3d 🙂 Check it out and register for the beta at The Coaches Center.

Cool isn't it? Imagine being able to attend a class from anywhere for those times when you just can't get to the Dojang, or for black belts and masters all over the world to connect and share their insights.

Free Unity3d for iOS and Android

I noticed a few tweets on the #unity3d hashtag about there being a free licence for iOS and Android publishing with the already free Unity3d. Sure enough, if you pop along to download Unity3d at the store you can add the ability to publish directly to iOS and Android (offer expires April 8th 2012).
unity3d
I already have the iOS basic licence and Unity3d installed, so it was a little less clear what to do.
On the store page there is a licence upgrade link; in there you have to paste in your existing licence number, and then you are able to “buy” the upgrade you need for free.
Unity will then send you a new licence number that you reactivate in the Unity client via the Unity/Serial Number menu, and away you go: you now have publishing to web, Windows, Mac, iOS and Android all enabled.
What are you waiting for ? It is a brilliant dev environment.
NB. As I point out at conferences when I rave about it, I don't work for Unity3d; I just really like what they do 🙂

Virtual campus tours for students

I know that if I need to check out somewhere or something new, I try and experience it in as many ways as the computer in front of me will allow. We used to just have to sit and paw through prospectuses before deciding to visit and check out a place (which is quite a commitment). These 3d virtual campus tours from Designing Digitally and my surnamesake Andrew Hughes (no relation that we have figured out yet) show the very real benefits of this mirror world application.

This is clearly not there to remove the need to travel and visit somewhere, but to give you interaction with both a representation of the physical space and the people that are already there to give you tours. So unlike a prospectus, you get to talk to people. Why wouldn't you use something like this?
So here is a specific industry vertical (in this case education admissions) using game-style technology and the benefits of metaverses to engage with people. This is the start of it all coming of age.
Well done 🙂
There is a live demo to go and see. Just bear in mind, if you are from the UK, that our US cousins are more proud of their further education than we are (yes, I know they pay, but we seem destined to dismantle our system).

All the pieces are falling in place – Unity3d

Back in late 2009 I wrote a post saying we had all the pieces to start seriously building virtual world toolkits and environments from off the shelf pieces.
Today I have seen two releases arrive of new hosted toolkits and services, both using Unity3d and some of the other pieces related to hosting a server component. They are both from stalwarts of the virtual worlds industry and both with former close ties to Linden Lab and Second Life.

In no particular order, the first is from SecondPlaces.net, called Unifier. As with all Unity environments, this benefits from the browser plugin to let it run in most browsers. According to the site it is using SmartFoxServer to broker the positions of the other avatars and Flash voice for VOIP.
The key is that it acts both as a hosted service on Amazon EC2 (i.e. that's where the server will be running for your instance) and as a run-it-yourself service. SmartFoxServer is a relatively simple Java application to get going on a server, with some config needed for ports etc. The important parts are it knowing what it needs to keep track of and what clients subscribe to. There also appears to be lots of interaction with other content, whiteboards etc. and the sort of dynamic tools needed to interact online. So a lot of work has gone into productising this.
The second offering is from Tipodean, which is both a service to run peer-to-peer Unity3d (which looks like it uses the Unity3d master server) and an OpenSim to Unity3d conversion service. So you can pay to get your build moved from prims to the mesh of Unity3d and then have some Unity3d polish applied to it.
It is not clear how dynamic any of the environments are as, typically, whilst Unity3d can load new assets on the fly, it is more complicated to set that up than the ability for people to walk around a fixed environment.
The upshot of all this is that there is more choice and scope for the market to grow. These join the other services out there and form part of an SL counterculture, in a slightly different way to the counterculture of OpenSim.
It’s all good.

The power of the prefab -unity3d

I have been diving into some more Unity3d development for some mini games for a gaming startup. Once again the subtle power of Unity3d shone through, with what is in effect object inheritance.
If you create an object on the stage it is a single instance; copying and duplicating it makes new instances. These are completely unrelated, with the exception of any materials or script references they have.
However, if in the project file area you do the magical “create prefab” and then drag the object from the stage back onto the prefab you have created, it automagically becomes a class.

Now if you use the prefab object to populate the stage you end up with multiple instances of the same base object. Are all your objects too big? Resize one on the stage and say “apply to prefab”; it will ripple the change back to the base class and then back out to all the instances.
Each object can still have its own configuration, e.g. drop a rotation script on the prefab and they will all rotate in the stage/scene.
However, if you expose parameters on that script you can set the rotation speed individually for each of those objects, i.e. override the defaults.
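A minimal sketch of such a script (the name is hypothetical; drop it on the prefab and tweak the exposed speed per instance in the Inspector):

using UnityEngine;

// Sketch: attach to the prefab and every instance rotates; the public
// field is the prefab default, overridable per instance in the Inspector.
public class Rotator : MonoBehaviour
{
    public float degreesPerSecond = 45f;

    void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}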
The visual nature of the objects and the scene, and the visual application of changes, is a great teaching tool for those people not yet quite grokking the whole class and instance thing in regular code.
It would be nice if, when creating a new object in the scene, you were prompted to make it a prefab, as 9 times out of 10 that's what you want to do, but that's a minor thing.

A powerful part of the gaming recipe – Graphics

This video is doing the rounds and was used at the FCVW conference, showing the powerful rendering capabilities of the CryEngine 2, Unity3d and Unreal engines.

I think, of all the elements in this, the live linking to Maya, showing the boned and rigged wireframe being manipulated at the start, starts to open up people's appreciation of what goes into these engines.
Also, with a fantastic level of graphic quality you need a fantastic level of modelling and texturing. Awesome lighting still needs to be placed on set in the right place. All this is also before you get to the layer of gameplay, the way a multiplayer environment might synchronise its data, the rules of any game, the testing of the entire product, the sound design, music etc.
A graphics and physics middleware selection is important, and these render tests really show the potential, but there is a lot more.
Having said that though, lots of the rest is more standard software engineering and IT architecture. It has to blend with the rich visuals of the user experience. Many techies have traditionally not been interested in the front end; it's too obvious when it goes slightly wrong, I think 🙂
Anyway enjoy the video, think under the graphics and see what amazing creations can be put together for work and play.

Browser to unity3d communication and back again

I have been doing a fair bit of Unity3d recently again. In particular I have been looking at the ways it can take data and parameters. It's all in the Unity documentation, but there were a few subtle things I missed first time around that I thought worth sharing, as much for my memory as anything.
The first useful thing is that the Unity plugin is able to simply talk back to the browser and call JavaScript functions in the page. So in C#, for example, I am able to do this.

void Awake ()
{
    Application.ExternalCall( "callbackunity", "The game says hello!" );
}

Where the page that the Unity3d content is served from has

<script type="text/javascript">
<!--
function callbackunity(arg)
{
    alert(arg);
}
-->
</script>


Obviously you can do more than just an alert, so I looked at what I can send back into Unity3d from the page and started to do this in the calling page.

<script type="text/javascript">
<!--
function callbackunity(arg)
{
    alert(arg);
    var unity = unityObject.getObjectById("unityPlayer");
    unity.SendMessage("Rezzer", "inboundfunction", "item1~item2~item3");
}
-->
</script>

This gets the Unity plugin module and calls a function called inboundfunction that is attached to a Unity game object called Rezzer. It is only able to pass a single value, so I send a string with a known separator (~) to be able to send more than one piece of data.
So the flow is that Unity loads, awakes and then makes a callback to the page, which then injects the data into Unity.
On the unity side I have this sort of function

void inboundfunction(string indata)
{
    string[] words = indata.Split('~');

    data1 = words[0];
    moredata2 = words[1];
    anotherpiece3 = words[2];
}

At first this all did not work (of course). I thought I had not put the right script on the right object, as I was getting a callback to the page but not injecting the data into the Unity object.
This turned out to be something quite simple in the end. The Unity docs example shows var unity = unityObject.getObjectById("UnityContent"); however, the page I had generated out of Unity3d, and then added my code to, used a different label for the Unity plugin in its setup. It called it "unityPlayer". So my failing code was because the JS in the webpage was not picking up the right object. As we know, computers need to have everything exact.
This was almost code blindness. I was thinking it was getting the Unity object (of course it was, how could it do anything other?), but it's an obvious schoolboy error: "UnityContent" <> "unityPlayer" 🙂
Once that little bug was sorted out it was all plain sailing. The parameters I pass as item1, item2, item3 etc. are generated by the PHP in Drupal that embeds the Unity3d content. So I can send anything to the Unity side based on which page is being served and by whom.
One of the other things I do, though, is use direct WWW communication to a server from inside Unity3d. The initial set-up code above is to establish some start parameters; once running, communication is not via the browser but via a hotline to the server instead.
That all just works as documented, though you have to make sure you are serving from and talking to the same server or dealing with the cross domain policy documents that help protect web apps from rogue applications in browsers.
This is all very basic code really, but if you are not from a web world it can seem a little unusual.
e.g. in .cs in unity3d

IEnumerator sendIt ()
{
    // Create a form object
    WWWForm form = new WWWForm ();
    // set up fields
    form.AddField ("data1", "some data");
    form.AddField ("userscore", score);

    // Create a download object
    WWW download = new WWW ("http://someurltohandlethings", form);

    // Wait until the download is done
    yield return download;

    if (download.error != null) {
        Debug.Log (download.error);
        print ("Error downloading: " + download.error);
    } else {
        print ("Sent");
    }
}

Because C# scripts in Unity handle this sort of waiting with coroutines rather than blocking the main thread, this is an enumerator function, which means you have to call it like this when you want to send anything

StartCoroutine (sendIt ());

As most of my quick Unity3d pieces had been in .js, this StartCoroutine was not as obvious, though it is in the Unity docs.
The URL I call has some PHP that gets at the form elements, but we are in normal web territory then.

$somedata = $_POST["data1"];
$somescore = $_POST["userscore"];

Of course all the error handling and data handling can be (and is) much more elegant, but this all seems to work very nicely, and the core of the communication is something I am able to drop in anywhere.

My series 1 TV showreel

I got to hang out in the edit suites at MTV Camden today, where the magic of TV gets crafted together from all our bits of film. John was making a showreel for the entire show to take to New York and the Kidscreen Summit; a version of that will get posted soon.
Afterwards though I got to suggest some pieces for my own personal showreel. This is all very exciting, as much of the official website doesn't work outside the UK. We stuck to pieces from series 1 (apart from the end, as a taster of the current series). It keeps the comedy spirit and shows a range of things. My acting is getting better over time, I promise 😉

See what you think.
Don't forget The Cool Stuff Collective, ITV1, 9:00am Saturdays