EyeBlend better than an Apple Vision Pro – still

In 2016 I wrote Cont3xt, the follow-up to Reconfigure, and part of that was an upgrade our lead character gets. In the first book she has access to an API that can manipulate the physical world like the virtual, and she builds that up into a Unity application for her smartphone. In the second she gets an EyeBlend headset, state of the art (and still just a little ahead of Apple and co's attempts). So she ports the Unity app and upgrades the messaging system for this high bandwidth device, to enhance the experience. Rather like all the developers are doing right at the moment. Life imitates art 🙂 Read Reconfigure first, but this might be a topical time to check out Cont3xt whilst you wait for the Apple demo in a local store next year. (Just £0.99 each for these adventures 🙂 )

“An EyeBlend? They are not on general release yet? How did you get that?” Her eyes lit up wildly as Roisin took in the news. 

“Oh you know, I have a few friends who hack for a living.” Alex replied with an air of comedy superiority. 

“You sure do! The EyeBlend is an awesome piece of kit. It does full blended reality applications. It uses field effect projection with live object tracking!” She said enthusiastically. “They call it a hologram, though it really isn’t as such. You can forgive that though as the effect is the same, apparently.” Roisin was on her tech turf again. 

Unity Networking – Techie info

I have been diving into Unity3d multiplayer again. Whilst in previous attempts I was using a socket server, I had ignored the simplicity of the peer-to-peer networking that Unity can do, which makes it much easier to prototype things.
The way this works is that Unity3d provides a broker, a.k.a. a Master Server, which acts as a lobby to allow a network game to show up and a client to connect to it. Once that connection is established, through some very simple properties and through the use of RPCs (Remote Procedure Calls) you are able to make things happen on each client.
For a more robust MMO solution there are things like Photon; however, to get into what it means to create a distributed system it is worth just using the basics to start.
I am not going to go over the lobby essentials, as that is covered in various other network tutorials, but I just wanted to share the simplicity of the network synchronisation.
When each person gets their version of the Unity scene for the game, the one that initiated the network connection is regarded as the server; all the others that connect are clients, but they are all running the same code.
This is where some confusion comes in, but Network.isServer is a property that helps you decide who's the boss.
Any of the clients or the server is able to create an object with a special version of “Instantiate”. This is used to create an instance of a fully functioning object (or prefab) in a scene. If that prefab is bestowed with the special component Network View, it is then able to be created with Network.Instantiate. The component is at the bottom of this picture. How the data flows and which property to inspect is configurable.

What that does is make the object appear on the creating client, but it also messages all the other clients (and the server) to create the same prefab (as they are running the same code anyway). Each of these objects is able to have a property or set of properties that are then kept updated. Typically this is something like the object's position and orientation (transform). If the owner of the object changes where it is, the changes ripple to each of the other clients without any further effort. Magic.
This provides some interesting design choices in the distribution of responsibility. I was building a generic playing card deck, and I started with each client being responsible for instantiating its own cards, as that is how it feels it should work. However it quickly became obvious that one client (i.e. the server) should be in control of the deck. This sounds obvious, but maybe it isn't always the case. The server instantiates all the cards and those ripple to all clients (including itself). Ownership of the cards for the game is then maintained by my own collections, despite the attraction of the .isMine property on objects. Now other objects and types of interaction can be distributed more equally. Either way you end up having to send other messages back and forth; whilst Unity3d will keep track of transforms, you have other data to send and synchronise.
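That design split can be sketched without Unity at all. Here is an illustrative engine-free JavaScript version of the bookkeeping (the class and method names are my own invention; the real project deals in Unity prefabs rather than strings):

```javascript
// Engine-free sketch: the server owns the deck and ownership lives in a
// plain map, rather than leaning on Unity's .isMine property.
function createDeck() {
  const suits = ["C", "D", "H", "S"];
  const deck = [];
  for (const s of suits) {
    for (let rank = 1; rank <= 13; rank++) deck.push(s + rank);
  }
  return deck;
}

class DealerServer {
  constructor() {
    this.deck = createDeck();         // only the server holds the deck
    this.ownership = new Map();       // card -> player: my own collection
  }
  dealTo(player) {
    const card = this.deck.pop();     // server "instantiates" the card...
    this.ownership.set(card, player); // ...and records who owns it
    return card;                      // clients just see the ripple
  }
}
```

The point is that the authoritative state (deck and ownership) lives in one place; the clients only ever receive the results.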
Once an object has a network instance you can call functions on scripts on those objects, wherever they are and on whichever client, using networkView.RPC().
This lets you specify the function name to call, the values to send and, more interestingly, an RPC mode. This means that you can call the function on all network instances, or on every network instance other than this client's version, etc.
Functions are made available on the script with a simple extra attribute in C# (it works in .js in a similar way):

[RPC]
void setvalue( int myval) {
cardval = myval;
}


So before you know it you have state-synchronised objects across a network, and you can concentrate on what they are and why they exist. It is certainly somewhere to start if you are not used to this sort of distributed problem.

Look just how simple things can be. Unity3d accessing the web

I have been doing a lot of Unity3d with data flying around, but I am still amazed at just how easy it is to get things to work.
For the non-techies out there this should still make sense, as it's all drag and drop and a little bit of text.
So you need a web based walk around some of your flickr pictures?
Unity3d makes it very easy to say “create scene”, “add terrain”, “add light”, “add first person walker (the thing that lets you move around)” and finally to say “create cube”.
It is as simple as any other package to create things, just like adding a graph in a spreadsheet, or rezzing cubes in Second Life.
Once the scene is created it is very easy to add a behaviour to an object; in this case this script is typed in and then dragged and dropped onto each cube in the scene. (It's in the help text too.)

var url = "http://farm3.static.flickr.com/2656/4250994754_6b071014d4_s.jpg";
function Start () {
// Start a download of the given URL
var www : WWW = new WWW (url);

// Wait for download to complete
yield www;

// assign texture
renderer.material.mainTexture = www.texture;
}

Basically you tell it the URL of an image, and three lines of code later that image is on the surface of your object, running live.
The other thing that is easy to do is drag a predefined behaviour of “DragRigidBody” onto the cube(s). Then you get the ability to move the cubes while the scene is running by holding onto them with a left click of the mouse.

Now the other clever piece (for the techies and programmers amongst us) is that you can create an object or collection of objects, bring them out of the scene and create a “prefab”. This prefab means that you can create lots of live versions of an object in the scene. However if you need to change them, add a new behaviour etc., you simply change the root prefab and the changes are inherited by all the instances in the scene. You are also able to override settings on each instance.
So I have a cube prefab, with the “Go get an image” script on it.
I drag a few of those into the scene and for each one I can individually set the URL it is going to use. All good Object Oriented stuff.
This is not supposed to be a state of the art game :) but you can see how the drag and drop works, moving around using live images I drag in from my Flickr in this sample.
Click in the view; W and S move you forwards and backwards, and a left click held over a block lets you move it around.
Downward gravity is reduced (just a number) and of course the URLs could be entered live and changed on the fly. I only used the square thumbnail URLs from my Flickr photos, so they are not high quality 🙂
This is also not a multiuser example (busy doing lots of that with Smartfox server at the moment) but it is just so accessible to be able to make and publish ideas.
The code is less complicated than the HTML source of this page, I would suggest. It's also free to do!

Virtual World/Augmented Reality Olympics, World Cup?

In a report just published by EngageExpo on the future of virtual goods there is an interesting comment by Tom Hale, the Chief Product Officer at Linden Lab (of Second Life fame), that says:
“Continue to see large brands experiment with engagement based investment to maximize exposure during zeitgeist events, for example the Olympics.”

Now I am hoping that it really does get taken seriously by the organizing bodies. The ways of representing sporting events are open for massive innovation.
I wrote about that here in April, and here and here in 2008 on eightbar, and have proved the potential with the Wimbledon work in Second Life since 2006.
I also received an invite (via Layar) to the M-Football conference, whose aim is to ensure that the 2010 World Cup (soccer) acts as a catalyst for mobile applications (including augmented reality ones).
Now, with two years to go, it really is time for the UK in particular to wake up and start doing things. It won't be enough to throw some things into Second Life or knock up a quick Facebook app two minutes before the event.
We are all here to help and to work in that direction. We can indeed make this the best Olympics ever.
Wimbledon/World Cup mashup bleak times
Back in 2006, when England went out of the World Cup, Yossarian and I shared the experience enhanced with Second Life as a back channel at the prototype Wimbledon build. It was simple, effective and very memorable (and I had avatared up as Sven, the coach).
Head in hands as we crashed out.
Imagine what we can all do if we actually plan this!
Apologies to any agencies working on this in earnest already. I do get the feeling we have not really started yet though.

This looks very promising – A World For Us Unity3d meeting rooms

Henri just commented on my previous blog post about Unity3d and Opensim. It would appear we have an interesting new player in the game now.
A World For Us have produced a web based virtual world meeting application using, amongst other things, Unity3d as the engine. I had a very quick look and got a good initial gut feel. Partly this is because I know and appreciate what Unity can do. Whilst there will be lots of questions (to be answered in depth later) about whiteboards, web sharing, physics, dynamic interaction etc., this just worked. Voice worked, chat worked, I did some avatar customization (no green hair though 🙁 ), I sat in a chair, I zoomed into a presentation and moved through it. Most of all, it ran on my Mac! A key point about Unity is that it is not tied to Windows in the way that many of the other plugins are.

Picture from A World For Us

We have all the pieces – Unity3d, Opensim, Evolver, Smartfox

I have now had this conversation several times with people about the potential future that a mix of open source and open minded development may bring to the virtual world industry. Much of what is happening seems to be driven by some of the direction Second Life has taken or is taking, though not so much to follow it into corporate lockdown as to break out and provide the flexibility and creativity that is needed for the next generation of virtual worlds.
In the early days of 2006 many of us said it would be great to be able to run a Second Life server, our own one under our control. That has taken a while to start to emerge, and it has emerged as an expensive product aimed at corporate IT departments. Luckily the open source community has rallied and created the excellent Opensim. This ticks all the boxes: it can be run locally, run in the cloud, or provided as a service. So we have an extensible virtual world server ready to be built upon.
The other missing component was a more controllable and rich interface. Yes, there is the Second Life Snowglobe open source client, but the need to certify and lock down variants to align with the product needs of Second Life means that lots of the flexibility is lost. Likewise the initial open source Linden based client was under a GPL licence, which stopped all sorts of development happening at the time it was needed most.
This is where Unity3d steps in I believe. It was Rob Smart who first started to show me this way back. Unity3d is a great front end, very flexible in how you build games and content for it.
This was a movie from back in September 2008, using a message from Second Life to several Unity clients to create a cube. This is loose integration: telling one place something has happened and letting the other place get on with it.
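That style of loose integration only needs a tiny agreed message. As a sketch of the idea in plain JavaScript (the actual wire format used in the demo isn't shown here, so this "createCube" shape is purely hypothetical):

```javascript
// Hypothetical "create cube" message for loose integration: the sender
// fires it off and forgets; each receiver interprets it locally.
function encodeCreate(x, y, z) {
  return JSON.stringify({ cmd: "createCube", pos: [x, y, z] });
}

function handleMessage(scene, raw) {
  const msg = JSON.parse(raw);
  if (msg.cmd === "createCube") {
    // In Unity this would be an Instantiate call; here we just record it.
    scene.push({ type: "cube", pos: msg.pos });
  }
  return scene;
}
```

Each world renders the cube however it likes; the message only says what happened, not how to draw it.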

Unity3d has a plugin architecture too. It runs in a browser, or deploys to application platforms like Mac and Windows. The visuals can be made very good very quickly too. Unity3d needs a server of some sort to operate as a multiuser platform (though it does do some peer to peer), hence applications like Smartfox are ideal for producing Unity multiplayer and MMO style games.
However Opensim has all the other layers of things needed to maintain a virtual world. It has asset databases, chat, positional awareness and server side scripting (as does Second Life, which it was originally based on, of course).
So we have an extensible and easy to get hold of client engine in Unity3D, and an extensible and easy to get hold of server/persistence virtual world engine in Opensim. There may well be challenges in making the two understand one another, but with the flexibility on both sides of the equation they are very solvable. This is a high level view; Rob has some more detail here on the challenges. Add in some interoperation definition with Vastpark to help bind the two and make some mappings.
Throw into the mix an open minded avatar wizard such as Evolver. There we can build avatars that we know can definitely be dropped into Unity3d.
So… I create an Evolver avatar, dropped as a resource bundle into a web deployed Unity3d client, which tells the Opensim server where I am in the coordinate system and which bundle I am using. Other people with a Unity3d client see the rich detailed avatar and the shiny Unity3d environment. However we do not have to stick to that one client: other people using a Second Life style client see the Second Life style rendering of the world.
This is already happening in some respects: the iPhone application Touch Life lets you log on to the public Second Life. In a sort of Bugblatter Beast of Traal moment, everyone can see you, but you can't see them. You navigate your avatar around the map and have full chat, inventory and economy access, but a very different view of the world to everyone else. (Of course Unity3d runs well on an iPhone too, so imagine that as an extension to Touch Life?)
Once there is an acceptance that there can be more than one view of the data, one where people without the full equipment can still see what is going on and participate, things get a lot easier to consider. Whilst a gaming assumption tends to be that we all need the same view at the same speed in order to have balanced gameplay (lag gets you killed), in collaborative spaces, education, meetings and art galleries this is less of an issue.
As the parts of the jigsaw come together over the next year the ability to have the same experience will re-emerge.

Evolver Avatar on the iPhone via Unity3d

It has been a busy week for travel and meetings. However, in the gaps between pieces of work I have been exploring what I can do quickly and simply with the iPhone 3GS. I had been looking at the various 3d toolkits, doing model import by hand.
In the end it struck me as obvious to use Unity3d iPhone, as this just takes away the underlying hassle.
Using Unity3d it was very easy to bring things such as my Evolver 3d model into an app and make them draggable and rotatable using touch. This took approximately 10 minutes to do. I spent more time wondering about the various particle effect parameters than anything else.
iPhone Evolver avatar
Obviously you can do a whole lot more, but it is a massive leap in productivity. It does not require any strange contortions or going via Windows in order to get a good 3d model in.
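For the curious, the drag-to-rotate part is mostly arithmetic: turning a finger's screen delta into yaw and pitch. An engine-free JavaScript sketch (the function name and sensitivity value are my own, not Unity's; in Unity the result would feed transform.Rotate):

```javascript
// Convert a touch-drag delta (in pixels) into yaw and pitch (in degrees).
// In Unity the result would drive transform.Rotate; here it is pure maths.
function dragToRotation(dxPixels, dyPixels, degreesPerPixel = 0.25) {
  return {
    yaw: -dxPixels * degreesPerPixel,  // horizontal drag spins the model
    pitch: dyPixels * degreesPerPixel, // vertical drag tilts it
  };
}
```

A 100 pixel horizontal swipe at that sensitivity gives a 25 degree spin, which feels about right on a phone screen.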
There are a few quirks in the tool chain, but things like the Unity Remote application, which lets you see what's going on via your Mac while streaming media from the iPhone, are great. (Hint: if you can't get Unity Remote to compile and deploy, just grab a version from the iPhone store. I lost a few minutes thinking about that!)
I know the hardcore techies will be saying you can build all this from the various bits, cocos, SIO2 etc., but I have had to do that one too many times, and I want to explore the ideas for applications rather than the guts of the platform.
Just having this in my hands and using it has created rather a long list of “I wonder if…” apps to create, including using the touch interface to control some Opensim objects.

Flocking Brilliant (fill in own joke here)

In my exploration of what Unity3d does and how easy it can be to get things going, I was about to set upon implementing the Craig Reynolds flocking code in the environment. I have done a fair few things with this flocking algorithm. Not all were published publicly, though about 4 years back I did create an organizational structure viewer where, on selecting a person, their related people would flock around them, so that you could navigate through an organization based on relationships and drag flocks around.
The principle of the flocking code is very simple. Each individual in a flock has a motivation to aim for a destination, but to back off from a collision with any other member of the flock. The code used is pretty much the same everywhere, based on Craig's originals. Flocking can provide some dynamic insight into the relationships between objects, in particular when there is more than one flock, or when the focus of the flock can be changed.
Before I set about my coding example I had a look around and, not surprisingly, I found that the Unify community wiki had already done the work needed, in both C# and JS.

Once set in motion the boids (as they are referred to everywhere) are cast out into the world. They then try to reach a point at the centre of the flock, never bumping into one another; as they get close they back off and then try again. An element of random direction provides an interesting display. Bear in mind these are simply acting under simple rules; the path and shape of the flock just emerges from them.
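Those rules boil down to a few vector sums per boid per frame. A minimal, engine-free sketch of one steering step in JavaScript (the separation threshold and unweighted sums here are illustrative, not Craig Reynolds' original tuning, and the Unify wiki scripts are rather more complete):

```javascript
// One steering step for a single boid: head for the flock centre,
// but back off from any neighbour that is too close.
function steer(boid, neighbours, tooClose = 1.0) {
  // Centre of the flock (the cohesion target).
  const centre = [0, 0, 0];
  for (const n of neighbours) {
    for (let i = 0; i < 3; i++) centre[i] += n.pos[i];
  }
  for (let i = 0; i < 3; i++) centre[i] /= neighbours.length;

  // Aim at the centre...
  const v = [0, 1, 2].map(i => centre[i] - boid.pos[i]);

  // ...but push away from crowded neighbours (the separation rule).
  for (const n of neighbours) {
    const d = [0, 1, 2].map(i => boid.pos[i] - n.pos[i]);
    const dist = Math.hypot(d[0], d[1], d[2]);
    if (dist > 0 && dist < tooClose) {
      for (let i = 0; i < 3; i++) v[i] += d[i] / dist; // back off
    }
  }
  return v; // desired velocity; a real boid would clamp and integrate this
}
```

Run that for every boid every frame and the flock's path and shape emerge from nothing more than these two pulls.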

I have simply published a version of that on this page (it will do the plugin download for you), as you need to see the beauty in both the complexity and simplicity of the movement.
As you can see a still just does not do it justice.
So now I have this I can see what I can apply it to. (A Flickr Flock of Photos maybe?)

A quick study with Unity3d, dropping things.

I have been looking into Unity3d a little more recently. As a programmer since I was 14 (that's a long while ago) I tend to look at languages and toolkits with a sort of skim-pattern approach. I look for tiny pieces of flexibility, knowing that if they exist then a lot more can be combined, with more effort and time, to produce the required results.
Unity3d has this flexibility in so many ways, as well as providing a whole host of useful tools to help create things on a grand scale.
I had already explored being able to get image resources from the web, from places like Flickr; once that inbound route is possible the web can be used for all sorts of triggers to an application. However I wanted to explore the physics and character animation potential.
I am not a graphic designer, nor an expert in high end 3d packages, but I do know a little of what I need to do. I started to use the excellent Cheetah3d on the Mac, as it is low cost, suits my needs and works well with Unity natively.
Whilst at the airport I built a little person, and also rigged it with bones and joints. The only problem I had was that, having created the individual prims, I could not see how to get them to become one mesh. It turns out you have to use “Import Children” in Cheetah3d. However, having done that, it leaves copies of the children in the hierarchy, which is helpful but confused me, as I thought it was not working. Once that was done though, we were all systems go.
Telling Unity3d about the figure was very simple, a drag and drop in fact.
Once in Unity3d it was easy to create a terrain, select some foliage, and have it automatically populate the terrain with that foliage. I then used the ragdoll wizard to tell Unity3d about the joints and parts of the Cheetah3d figure.
The rest was a few clicks to see what worked.
This example is not “interactive” as such, just a little example for my own benefit, but I liked the result so I thought I would share it 🙂

****Update I just added a little bounce button to throw the things back up in the air again (once they have landed). This does a publish of a message to a set of listeners; once again, a nice implementation. Of course all the ragdoll physics may make it a little clunky for no real reason, but optimization is another subject 🙂
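The publish/listener shape behind that button is a nice general pattern. Stripped of the physics, it is just this (an illustrative JavaScript sketch; the Unity version sends a message to components instead):

```javascript
// Tiny publish/subscribe: one "bounce" publisher, many ragdoll listeners,
// each of which would apply its own upward impulse when told.
class BounceBus {
  constructor() { this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  publish() { for (const fn of this.listeners) fn(); }
}
```

Each ragdoll part subscribes once when it lands, and the button's click handler just calls publish(), knowing nothing about who is listening.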

You can launch the example here; it should do all the plugin checking for you, I hope.