Monthly Archives: November 2014


Unity 4.6 – A new UI

I have always really enjoyed developing in Unity3D, as you may have noticed from my various posts and projects, and watching it evolve into such a major platform has been fascinating. Yesterday Unity moved from version 4.5 to 4.6. In most systems a minor point release is no big deal. In fact, the whole patch-and-push versioning, with its lack of backwards compatibility, that bedevils the tech industry is something I find I have to battle with on a regular basis. Get excited about a change in version numbers? No thanks. Except this release is the exception that proves the rule.
Unity3d is great at letting a developer start placing objects in an environment and attaching behaviours and code to those objects. A lot can be done with point-and-click and drag-and-drop, but there is also room for real lines of code and proper software engineering; in fact that is essential. I can, with ease, make a 3d character animate, move around, collide with scenery etc. Up to now, though, if I wanted to use any other form of user input through a Graphical User Interface, I had to really go back in time.
GUI creation in 4.5, and in all the earlier releases, was awful. In code, inside the OnGUI loop, you had to try to describe layouts and positions where much of what you wrote was implied. So you had to describe the user interface and its behaviour in code that merged function with display, without being able to see where anything was until you actually ran it. This is the opposite of the rest of Unity3d, which lets you see where things are in the development environment both when running and when not.
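To give a flavour of what I mean, here is a minimal sketch of that old immediate-mode style (names and pixel values are just illustrative):

```csharp
using UnityEngine;

// Old-style immediate-mode GUI: layout, styling and behaviour all live
// inside OnGUI, and nothing is visible until you actually press Play.
public class OldStyleMenu : MonoBehaviour
{
    void OnGUI()
    {
        // Position and size are hard-coded pixel values, guessed and
        // re-run repeatedly until they look right on one resolution.
        if (GUI.Button(new Rect(10, 10, 150, 40), "Start"))
        {
            Debug.Log("Start pressed");
        }
    }
}
```

The button only exists while this code runs, which is exactly why the editor can show you nothing about it beforehand.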
I have lost track of the number of times I have tried a “fancy” layout of buttons, one that starts nicely, with flowing positions based on screen resolution, only to get lost in the code and resort to fixed x,y positions, which are generally hit and miss for the first few attempts. In Unity3d you can expose parameters to allow compound prefab objects to be configured without code changes, so I have ended up with UI helper objects that change my spacing and positions at runtime, letting me tweak the numbers to shuffle buttons around. Unity3d is great at letting you alter runtime values, and also great at going back to the originals when you stop running. Unfortunately this works against you on UIs: you have to tweak the values, then remember to write them down on a piece of paper before hitting stop, so that you can apply them in the code for next time once you get it ‘right’.
That may or may not make sense, but take my word for it, it was a right pain. That is before you even get to altering the look of the buttons, sliders etc. with GUI.Skin, which is like CSS but much, much worse. So all my research projects have had plain buttons. I am not a graphic designer, and I was not easily able to explain the UI process, or how to work with designers, to make things nicer.
All that is now gone though. The old way still works in 4.6 (which is a relief from a phased-improvement position), but a shiny new UI creation tool and framework is now, at last, public. It has long been promised, and by long I mean literally years! I am sure when I get into the depths of it there will be a few things that cause grief, but that's programming, folks!
Now the UI exists as a real type of Unity3d visual object. If you create a layout area you can see it; a button is there all the time. Each of the objects is part of a proper event system, and many of the normal functions are exposed as parameters. If you want a button to do something when pushed, you tell it in its setup which function to call.
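You can do that wiring in the inspector, or from code. A small sketch, with illustrative names, of hooking up one of the new buttons:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of wiring a 4.6 uGUI button from code. The Button field is a real
// scene object (assigned by dragging it in via the inspector), and the
// handler is an ordinary method rather than something polled inside OnGUI.
public class ContinueHandler : MonoBehaviour
{
    public Button continueButton; // assigned in the inspector

    void Start()
    {
        continueButton.onClick.AddListener(OnContinue);
    }

    void OnContinue()
    {
        Debug.Log("Continue clicked");
    }
}
```

The key difference from the old world: the button exists and is visible in the editor whether or not anything is listening to it.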
Previous UI buttons were only ever placed on the view rectangle, like a Heads Up Display (HUD). I have often needed things to be interactive elements in a 3d environment but to act like buttons. Again I have worked around this, but now a UI canvas, with all its actions, rollovers etc., can be detached from being a HUD and placed into the environment. In my hospital environment this means I can have a better AR-style UI at each bed. I have something like that at the moment, but it does not have all the nice feedback and customisation a UI element can have.
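In code terms, detaching the canvas from the screen comes down to its render mode. A minimal sketch (positions and scale are illustrative, assuming the script sits on the canvas object):

```csharp
using UnityEngine;

// Sketch: switching a Canvas from a screen-space overlay (a HUD) to world
// space, so it can sit in the 3d scene, e.g. at a hospital bed.
public class BedsideUI : MonoBehaviour
{
    void Start()
    {
        Canvas canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;      // live in the scene, not on the lens

        transform.position = new Vector3(2f, 1.5f, 0f); // e.g. just above a bed
        transform.localScale = Vector3.one * 0.01f;     // canvas units are pixels, so shrink it
    }
}
```

Once it is in world space the buttons still get all the normal rollover and click behaviour, which is exactly the workaround-removal I was after.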
The other major change is how the UI elements can be typed, and hence changed. Each element can be a regular button, slider etc., but it can also be a graphic, or even the result of another camera rendering a live texture.
Here is the game view (though not running) of the demo UI. The Continue button is highlighted, so on the right are its properties, events etc. The mini menu shows the choices, just for that button, of how to transition on normal events such as rollover and click. Here it is set to Animation, so it uses Mecanim and can potentially go flying around the screen twirling. Simpler transitions, such as just a colour tint or a sprite swap, can also be defined.

This was possible before, but again it was a lot of messing around. The old system of what to do on a rollover etc. was buried in the skin code or overridden in OnGUI. Now it makes use of the regular Mecanim animation system. This state machine allows definition of all sorts of things, and it is how we make characters jump, dive into rolls, duck etc. It makes sense to have that for more complex UI transitions and states. In multiplayer environments I use a Mecanim state change value, sent over the network, to make sure any mirrored/ghosted network object acts in the same way. So I guess now I will be able to use that for UIs too, to keep players in sync with certain modal activity.
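Driving a UI element's animator looks just like driving a character's. A tiny sketch, where "Highlighted" is a hypothetical trigger defined in the button's animator controller:

```csharp
using UnityEngine;

// Sketch: nudging a UI element's Mecanim state machine from code, the same
// way a character state machine is driven. The trigger name is illustrative.
public class UIStateDriver : MonoBehaviour
{
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    public void Highlight()
    {
        animator.SetTrigger("Highlighted");
    }
}
```

That single trigger value is also the sort of thing that can be sent over the network so a mirrored object plays the same transition.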
Anyway, 4.6 and the new UI are obviously a precursor to the much bigger Unity 5.0 coming very soon. However, it has dropped at the right time for the next phase of a project, so I get to use it in anger and re-engineer something that just about works into something much better.
As I tweeted, this is a great xmas gift, thank you Unity 🙂

Landing on a Comet ! Wow!

Yesterday I sat glued to the internet video stream from ESA HQ in Germany, tensely waiting to hear, via the orbiter Rosetta, that the lander Philae had made it to the surface. When it did, and the cheers started to ripple around the feed, I had a definite lump in my throat and a tear in my eye. I was watching it with predlet 2.0. He had asked if he could play Minecraft and for once I said no, watch this! Of course, without the history behind it all, it was just a few people sitting looking at a screen, with no commentary. So I tried to explain the enormity of what we were waiting for. The fact that the probe launched (10 years ago) before he was born is pretty amazing, and the fact that the project started 20 years ago equally so, though that makes less impact. I think my enthusiastic cheer when it landed probably got it across to him though.
It is truly awe inspiring that people have gathered together across countries to put together a massive project to land a tiny machine on a comet 300 million miles away in order to then perform some experiments and determine what the makeup of the comet is.
It transcends all the ridiculous austerity measures and the greed of those with the world's wealth. This helps the human race and gives some sense of purpose. We need more of this!
This image came from the surface of the comet!
So we watched this on a 70mbps fibre optic internet stream, live via a web browser on a PS4 games console, whilst also second-screening BBC News on iPlayer on a small touchscreen wifi-enabled device (an iPhone), whilst also following the tweets from the lander's account. I was sharing it with my son in person, but also with my friends around the world who were also tuned in. 10 years ago, when the probe launched, none of the ways we engaged with the landing existed in any mainstream form.
I am sure there are some naysayers who will argue we need to sort this planet out first, but I think we need to sort this planet out as well! This sort of project provides hope and unity. It inspires generations (just as the moon landings did me when I was just a toddler). That makes it a great investment. We learn lots about our past and help figure out our future, but the will and the passion can also be turned back to earth, as long as we don't let commercial greed take over.
What an awesome experience, and I hope, despite the 3 bounces, that the lander continues to fulfil its mission. Though, as the scientists have been saying when asked what happens now, the answer is: we don't know, and that's why we do this sort of science, to find out.

Some fun with space science and physics

Last week, on Prof Brian Cox's excellent TV show Human Universe, a wonderful real-life experiment was shown. It is common at school to learn about gravity, about mass and weight. Often the original concepts from Galileo around dropping objects are covered and considered. We now also see footage from space and weightlessness.
In this experiment, however, a giant vacuum chamber is used to show the difference air resistance makes. Dropping a heavy thing like a bowling ball and a light thing like a bunch of feathers, we are taught by our own observations that the ball will drop faster. In this clip, though, the air is removed from the environment and we get to see the ball and feathers drop at the same rate. It is spooky, despite knowing the science. You can see from the scientists' reactions how special it is to see this. It is real-life magic at work.

I have shown the predlets and they are bemused and intrigued too.
I have always wanted the predlets to feel the wonder of science. We spend a lot of time on screens looking at the virtual. This is all great, but the real thing can be even more amazing.
We just took delivery of a proper telescope this week to be able to check out the night sky. This was partly from being at a friend's and all the kids wanting to see the moon close up with their scope, partly from the wonder at the size of the universe that comes from playing space games like Elite Dangerous on the Oculus Rift, and partly the influence of TV shows like Human Universe.
I remember looking up in awe at all the constellations and planets, and of course the moon when I was about 7 and armed with a pair of binoculars.
This box arrived from Amazon

After a little bit of assembly it looked like this.

The equatorial mount on it is, if you are used to 3d applications, really interesting: three axes of movement, but up is not always up :). You get the same thing in 3d when your rotations are relative rather than absolute.
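For anyone who has not hit this in a 3d app, a tiny Unity-flavoured sketch of the difference (values are arbitrary):

```csharp
using UnityEngine;

// Why "up is not always up" on a multi-axis mount: a relative rotation is
// applied in the object's current local frame, so the same input moves you
// differently depending on the rotations that came before it.
public class MountDemo : MonoBehaviour
{
    void Demo()
    {
        // Absolute: set the orientation directly in world terms.
        transform.eulerAngles = new Vector3(45f, 0f, 0f);

        // Relative: rotate about the *local* up axis, which after the tilt
        // above no longer points at the world's up.
        transform.Rotate(0f, 30f, 0f, Space.Self);
    }
}
```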
There are lots of things to consider when setting it up, but just with the basics I pointed it at the moon for us last night. We used the 20mm lens that came with it and got a pretty full-frame moon.
Taking a photo was tricky, as I did not have a camera mount, only the iPhone. However, jiggling it around a bit, I ended up with this

The predlets were somewhat aghast at seeing the craters this close. Obviously they have seen photos before, but seeing it yourself certainly does something different. It did when I was a kid, so fingers crossed.
Later I spent some time exploring how to line the thing up and adjust the right bits and pieces, and there is lots more to explore 🙂
Also, as a side effect, since kids always like to play with the boxes things come in, we ended up with a lot of cardboard, just on the day school asked for some to be brought in for a makefest they are doing in years 3/4 🙂
TV, the internet, games and the rest might be considered sedentary and lazy, but they can be used to inspire, to explore and to raise awareness.