Unity 4.6 – A new UI

I have always really enjoyed developing in Unity3D, as you may have noticed from my various posts and projects, and watching it evolve into such a major platform has been really interesting. Yesterday Unity moved from version 4.5 to 4.6. In most systems a minor point release is no big deal. In fact the whole patch-and-push versioning, with its lack of backwards compatibility, that bedevils the tech industry is something I find I have to battle on a regular basis. Get excited about a change in version numbers? No thanks. Except this release is the exception that proves the rule.
Unity3d is great at letting a developer start placing objects in an environment and attaching behaviours and code to those objects. A lot can be done with point-and-click, drag-and-drop, but there is also room for real lines of code and proper software engineering; in fact that is essential. I can, with ease, make a 3d character animate, move around, collide with scenery and so on. Up to now, though, if I wanted any other form of user input with a Graphical User Interface I had to really go back in time.
The GUI creation in 4.5, and in every release before it, has been awful. In code, inside the OnGUI callback, you had to try to describe layouts and positions where much of what you wrote was implied. You had to describe the user interface and its behaviour with code that merges function with display, whilst not being able to see where anything is until you actually run it. This is the opposite of the rest of Unity3d, which lets you see where things are in the development environment both when running and when not.
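To give a flavour of the old approach, here is a minimal sketch of the kind of immediate-mode OnGUI code I mean (the class name and labels are just for illustration):

```csharp
using UnityEngine;

// A minimal sketch of the old immediate-mode approach: layout, look and
// behaviour all live inside OnGUI, and nothing is visible until you press Play.
public class OldMenu : MonoBehaviour
{
    void OnGUI()
    {
        // Positions are computed by hand from the screen size, every frame.
        float w = Screen.width * 0.2f;
        float h = 40f;
        if (GUI.Button(new Rect((Screen.width - w) / 2f, 100f, w, h), "Start"))
        {
            // Function and display merged into the same call.
            Debug.Log("Start pressed");
        }
    }
}
```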
I have lost track of the number of times I have tried a “fancy” layout of buttons, one that starts nicely, with flowing positions based on screen resolution, only to get lost in the code and resort to fixed x,y positions, which are generally hit and miss for the first few attempts. In Unity3d you can expose parameters to allow compound prefab objects to be configured without code changes, so I have ended up with UI helper objects that change my spacing and positions at runtime, letting me tweak the numbers to shuffle buttons around (there is a sketch of one below). Unity3d is great at letting you alter runtime values, and also great at restoring the originals when you stop running. Unfortunately this works against you on UIs: you have to tweak the values, then remember to write them down on a piece of paper before hitting stop, so that you can apply them in the code for next time once you get it ‘right’.
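A minimal sketch of the kind of helper object I mean, with the spacing exposed as public fields so the numbers can be nudged in the inspector while the game runs (all names and values are illustrative):

```csharp
using UnityEngine;

// Public fields show up in the Unity inspector, so button spacing can be
// tweaked at runtime without touching code.
public class ButtonLayoutHelper : MonoBehaviour
{
    public float startY = 100f;
    public float spacing = 50f;
    public float width = 200f;
    public string[] labels = { "Start", "Options", "Quit" };

    void OnGUI()
    {
        for (int i = 0; i < labels.Length; i++)
        {
            var rect = new Rect((Screen.width - width) / 2f,
                                startY + i * spacing, width, 40f);
            if (GUI.Button(rect, labels[i]))
                Debug.Log(labels[i] + " pressed");
        }
    }
}
```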
That may or may not make sense, but take my word for it, it was a right pain. And that is before you even try to alter the look of the buttons, sliders and so on with GUISkin, which is like CSS but much, much worse. So all my research projects have had plain buttons. I am not a graphic designer, and I was not easily able to explain the UI process, or work with designers, to make it nicer.
All that is now gone. The old way still works in 4.6 (which is a relief from a phased-improvement position), but a shiny new UI creation tool and framework is now at last public. It has long been promised, and by long I mean literally years! I am sure when I get into the depths of it there will be a few things that cause grief, but that's programming, folks!
Now the UI exists as a real type of Unity3d visual object. If you create a layout area you can see it; a button is there all the time. Each of the objects is part of a proper event system, and many of the normal functions are exposed as parameters. If you want a button to do something when pushed, you tell it in its setup which function to call.
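The same hook-up the inspector does can also be written in code. This is a minimal sketch using the new UnityEngine.UI namespace, with a hypothetical handler name:

```csharp
using UnityEngine;
using UnityEngine.UI; // the new 4.6 UI namespace

public class MenuActions : MonoBehaviour
{
    public Button continueButton; // drag the UI Button in via the inspector

    void Start()
    {
        // onClick is a UnityEvent; listeners can equally be wired up
        // entirely in the editor with no code at all.
        continueButton.onClick.AddListener(ContinuePressed);
    }

    void ContinuePressed()
    {
        Debug.Log("Continue clicked");
    }
}
```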
Previous UI buttons could only ever be placed on the view rectangle, like a Heads Up Display (HUD). I have often needed things to be interactive elements in a 3d environment but to act like buttons. Again, I have worked around this, but now a UI canvas, with all its actions, rollovers and so on, can be detached from the HUD and placed into the environment. In my hospital environment this means I can have a better AR-style UI at each bed. I have one at the moment, but it does not have all the nice feedback and customisation a UI element can have.
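A minimal sketch of the idea, assuming the standard Canvas render modes; in practice you would probably set this up in the editor rather than in code:

```csharp
using UnityEngine;

// Switch a Canvas from screen-space (HUD) to world space so it can sit
// at a bed in the scene. Names here are illustrative.
public class BedsideUI : MonoBehaviour
{
    public Canvas canvas;

    void Start()
    {
        canvas.renderMode = RenderMode.WorldSpace;
        // World-space canvases are sized in world units, so scale it down.
        canvas.transform.localScale = Vector3.one * 0.01f;
    }
}
```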
The other major change is how the UI elements can be typed, and hence changed. Each element can be a regular button, slider and so on, but it can also be a graphic, or even the result of another camera rendering to a live texture.
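For instance, a live camera feed can be pushed into a UI element via a render texture. A minimal sketch, with illustrative names, using a RawImage to display a RenderTexture:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Point a secondary camera at a RenderTexture and show that texture
// inside the UI as a RawImage.
public class LiveTextureElement : MonoBehaviour
{
    public Camera feedCamera; // the secondary camera
    public RawImage target;   // the UI element that displays its output

    void Start()
    {
        var rt = new RenderTexture(256, 256, 16);
        feedCamera.targetTexture = rt;
        target.texture = rt;
    }
}
```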
Here is the game view (though not running) of the demo UI. The continue button is highlighted, so on the right are its properties, events and so on. The mini menu is showing the choices, just for that button, of how to transition on normal events such as rollover and click. Here it is set to Animation, hence it uses Mecanim and can potentially go flying around the screen twirling. Simpler transitions, such as just a colour tint or a sprite swap, can also be defined.

This was possible before, but again it was a lot of messing around. The old system of what to do on a rollover and the like was buried in the skin code or overridden in OnGUI. Now it makes use of the regular Mecanim animation system. This state machine allows definition of all sorts of things; it is how we make characters jump, dive into rolls, duck and so on. It makes sense to have that for more complex UI transitions and states. In multiplayer environments I send a Mecanim state-change value over the network to make sure any mirrored/ghosted network object acts in the same way. So I guess now I will be able to use that for UIs too, to keep players in sync with certain modal activity.
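Something like this minimal sketch, using the legacy Unity networking RPCs (all names here are hypothetical), is roughly what I have in mind for keeping a UI panel's Mecanim state in step across players:

```csharp
using UnityEngine;

// Drive a UI panel's Mecanim state locally and replay the same trigger
// on every connected player via an RPC.
public class SyncedUIPanel : MonoBehaviour
{
    public Animator panelAnimator; // the Animator on the UI element

    // Call this on the local player; the RPC mirrors it everywhere.
    public void Open()
    {
        GetComponent<NetworkView>().RPC("SetPanelTrigger", RPCMode.All, "Open");
    }

    [RPC]
    void SetPanelTrigger(string trigger)
    {
        panelAnimator.SetTrigger(trigger);
    }
}
```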
Anyway, this 4.6 and its new UI is obviously a precursor to the much bigger Unity 5.0 coming very soon. However, it has dropped at the right time for the next phase of a project, so I get to use it in anger and re-engineer something that just about works into something much better.
As I tweeted, this is a great xmas gift. Thank you, Unity 🙂
