opensim


Minecraft mirror worlds

Way back in 2006, when many of us started to use the persistent, multi-user, networked virtual world with self-created content (Second Life) in all sorts of places like business and education, we thought we were on the cusp of a virtual world revolution. As a metaverse evangelist in a large corporation it was often an uphill struggle to persuade people that communicating in what looked like a game environment was valuable and worthwhile. It is of course valuable and worthwhile, but not everyone sees that straight away. In fact some people went out of their way to stop virtual environments and the people that supported and pioneered their use. Being a tech evangelist means patiently putting up with the same non-arguments and helping people get to the right decision. Some people of course just can’t get past their pride, but you can’t win them all.
For me and my fellow renegades it started small in Second Life

I also captured many of the events over the course of that first year in a larger set, along with loads of blog posts on eightbar, including this, my first public post, which always brings back a lot of memories of the time and place. It was a risky thing (if you are worried about career prospects and job security) to publicly post about something that you know is interesting and important but that is not totally supported.

One of the many ways people came to terms with virtual worlds was in the creation of mirror worlds. We all do it. In a world of infinite digital possibilities we ground ourselves by remaking our home, our office, our locale. We have meeting rooms with seats and screens for PowerPoint. This is not wrong, as I wrote in 2008 when we had a massive virtual worlds conference in London.

It seems we are seeing many more mirror world applications appear once more, and generally the builds are in Minecraft. It is interesting that Minecraft has all the attributes of more detailed virtual worlds like Second Life and Opensim. It is a persistent virtual environment. It has the 8-bit block look and much less detailed and animated avatars, but it does have a very easy approach to building. You stack blocks. You still have to navigate the 3D space, get your first-person view in the right place and drop a block. (Many of the objections in 2006 were from people who felt they could not navigate 3D space easily, something I think has started to melt away.) The relative resolution of a build in Minecraft is low; you are not building with pixels but with large textured blocks. This does have a great levelling effect though. The richer environments require other 3D and texturing skills to make things that look good. Minecraft is often compared to Lego. In Lego you can make very clever constructions, but they still look like Lego. This becomes much less of a visual design challenge for those people less skilled in that art. A reduced palette and a single block type mean not having to understand how to twist graphic primitives, upload textures, consider lighting maps and so on. It is a very direct and instant medium.
Minecraft has blocks that act as switches and wires, which allows the creation of devices that react, but it does not have the feature that I used the most in Second Life and Opensim: being able to write code in the objects. There are ways to write code for Minecraft mods, but it is not as instant as the sort of code scripts in objects in more advanced virtual environments. Those scripts alter the world, respond to it, push data to the outside and pull it back to the inside. That allows for a more blended interaction with the rest of the physical and digital world: preserving state, creating user interfaces and so on. It is all stuff that can be done with toolsets like Unity3d and Unreal Engine, but those are full dev environments. Scripting is an important element unless you are just creating a static exhibit for people to interact in.
Minecraft also lacks many of the human to human communication channels. It does not have voice chat by default, though it is side loaded in some platforms. It has text chat, but it is very console based and not accessible to many people. The social virtual worlds thrive not just on positional communication (as in Minecraft, you can stand near something or someone) but on other verbal and non-verbal communication.
The popularity of Minecraft has led to some institutions using it, or wishing to use it, to create mirror worlds. Already there was the Ordnance Survey UK creation which “is believed to be the biggest Minecraft map made using real-world geographic data.” (Claiming firsts for things was a big thing back in 2006/7 for virtual worlds.) It has just been updated to include houses, not just terrain. By houses, though, this is not a full recreation of your house to walk into, but a block showing its position and boundary. This map uses 83 billion blocks, each representing 25m of the UK, according to a story by the BBC.
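As a rough back-of-the-envelope check on those numbers (the UK land area of roughly 244,000 square kilometres is my own assumption, not something from the BBC piece), the figures work out at a couple of hundred blocks stacked per 25m column:

# Rough back-of-envelope on the OS map figures: 83 billion blocks, each with a
# 25m x 25m footprint. The UK land area (~244,000 km^2) is my assumption.
uk_area_m2 = 244_000 * 1_000_000          # ~2.44e11 square metres
block_footprint_m2 = 25 * 25              # 625 square metres per block column
surface_columns = uk_area_m2 / block_footprint_m2
layers = 83_000_000_000 / surface_columns
print(f"{surface_columns:.2e} columns, ~{layers:.0f} blocks per column")
# ~3.9e8 surface columns, so roughly 200 blocks stacked per column, which fits
# a terrain model with height and depth rather than a single flat layer.
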
The BBC also reported this week on the British Museum looking at recreating all its exhibits and its buildings in Minecraft. This is an unusual approach to their exhibits. It will be an interesting build and they are asking the public to help. So once again Minecraft’s simplicity and accessibility will allow anyone to create the artefacts. It will, however, create a very low-res, low-texture experience. So it is a mirror world, in that it will be like the museum, but it is more of a model village approach. You admire the talent to make it in the medium. It seems more of an art project than a virtual world project. It is all good though 🙂
The BBC seem to be on a Minecraft vibe just as they were on a Second Life vibe back in the last wave. Paul Mason and BBC Newsnight came and did a piece with me and the team of fellow virtual world pioneers in 2007. We were starting to establish a major subculture in the corporate world of innovators and explorers looking into how we could use virtual worlds and how it felt. Counter culture of this nature in a corporation is not always popular. I have written about this before, but on the morning of the day we filmed this for the BBC I had my yearly appraisal. My management line were not keen on virtual worlds, nor the success, so gave me the lowest rating they could. I went straight from that (which is very annoying and disheartening) into an interview with Paul on camera. Sometimes you just have to put up with misguided people trying to derail you, other times you just get out of the situation and go do it elsewhere.
Anyway, Minecraft offers a great deal. It does allow mirror worlds, though it also allows for an otherworldly approach. Most blocks do not obey gravity. You can build up and then out, a platform with no supporting structure; you can dig tunnels and go underground (the point of the mine in Minecraft) and not worry about collapse. You still have real-world up and down, left and right though. Most virtual worlds do this. Disney Infinity, Project Spark, Second Life, Opensim and the new High Fidelity etc. are all still avatars and islands at their core. People might mess with the scale and size of things, such as the OS map or the hard drive machines and guitars mentioned in this BBC piece on spectacular builds.
I feel we have another step to take in how we interact at distance or over time with others. Persistent virtual worlds allow us to either be in them, or even better, to actually be them. My analogy here is that I could invite you into my mind and my thoughts, represented in a virtual environment. It may not be that those things are actual mirrors of the real world; they might be concepts and ideas represented in all sorts of ways. It may not mean gravity and ground need to exist. We are not going to get to this until we have all been through mirror world and virtual world experiences. That is the foundation of understanding.

Not all avatars and islands? from Ian Hughes

Whilst many people got the virtual worlds of Second Life et al back in 2006, we were only continuing the ground preparation of all the other virtual world pioneers before us. Minecraft is the first experience to really lay the foundations (all very serious, though it is fun to play with too!). These simple 8-bit style virtual bricks are the training ground for the next wave of virtual worlds, user generated content and our experiences of one another’s ideas. They may be mirror worlds, and they have great value. There is no point in me building an esoteric hospital experience when we need real examples to train on. However, there is a point to having a stylised galleon floating in space as a theme for the group counselling experience we created.
The test from the naysayers of “it looks like a game” is really dying off now it seems. It will instead be “it doesn’t look like the right sort of game”. This is progress, and I still get a rush when I see people understand the potential we have to connect and communicate in business, art, education or just good old entertainment. We could all sit and scoff and say yeah, we’ve done that already, yes we know that etc. I am less worried about that; there might be a little voice in me saying I told you so, or wow, how wrong were you to try and put a stop to this you muppets, but it is a very quiet voice. I want everyone to feel the rush of potential that I felt when it clicked for me, whatever their attitude was before. So bring it on 🙂

Scaling – It’s all relative

Holidays give you time to ponder and inject serendipitous thoughts and events into your life. For me this time it was seeing an episode of Coast telling part of the story of Albert Einstein in Belgium. It got me thinking about relativity and the points of view that people approach things with. In particular this related to just how many people and things need to be in a virtual space at any one time.
Cloning me in OpenSim
If you arrive at metaverse interaction from a “business” perspective, or are used to Second Life and/or Opensim, then the limit of around 50 avatars in one region or space at a time is viewed as an unfortunate restriction. The point of view being used is that of needing to be a mass communication vehicle, running huge events for thousands of people who will all listen to your message. That however is a flat web mentality: a billboard, even with interaction and a sense of belonging.
If you arrive at metaverse interaction from a “games” perspective, having 32 people in one space performing a task is pretty much the maximum (in general). Having more doesn’t always add to the experience. OK, so Eve Online has 60,000 users on one server, but you don’t really get to see them. It is also the exception. A few games have dabbled with FPS scenes with 128 people battling. Your battle, however, is with the few dug in around you, seldom the people on the other side of the map.
So we have two relative viewpoints. It is odd that the “business” perspective, where it is about meetings and communication, sharing ideas and understanding, will use the objection of “oh, only 50 people?” when in fact the gatherings it needs are small groups collaborating on fast-moving ideas. The games industry direction is one of “what’s the point of having all that freedom and that many people?” when that industry is about mass take-up and use of produced resources.
If we had the ultimate virtual environment that could house an infinite number of objects and avatars at amazing resolutions, would we be any better off in working out how to communicate or play with one another?
Is it the restrictions of the mediums themselves that bring the greatest creativity?
If everyone could log on and all be in one place, surely we would still have sparse areas where no one was, or overcrowded areas that seemed so busy you would use your client to not bother rendering quite so many people.
I am not saying we should not strive for more and bigger spaces; I would love to see them. I am also not saying let’s stick with 32 soldiers in a war-torn town. I am saying that everyone should consider the other perspectives on what they are trying to do and why, and not discount the other view. The chances are the other view may be what you actually want to do.
I was also struck by scale and relative size whilst wandering around the amazing sand sculptures in Portugal. It was like a texture-free virtual world.

Each of these is crafted from millions of particles of sand. It is not just a surface rendering; the whole thing is sand right through. No computer-style rendering shortcuts. So each particle is a physical pixel, a phixel? It takes up space and can only be viewed by the number of people who can fit in the space. The constraints of the medium again make it interesting. There is a choice to sculpt in lots of materials, yet this huge gathering chose only to use sand, because of its properties and impact, and the craft and talent required to use it.
This all blended in mentally with Minecraft.

The predlets got interested seeing me start to play and actually understand the point and pointlessness of Minecraft.
I was only really doing a few bits and pieces, but once you start digging and rebuilding, combining and digging some more, it is (as its success has shown) a totally intriguing experience. I think it works because of its Lego-like qualities. You are restricted in the medium and in the resources, but the challenges you set yourself become very real despite the old-school Doom-like block graphics.
So once again it is all relative, it is not everything to everyone with one perfect ultra solution.
In the meantime it is good news that Intel is pushing forward with 1,000 avatars in a sim (i.e. more than most high-end games would dream of going near) as reported here http://www.hypergridbusiness.com/2011/06/1000-avatars-soon-coming-to-a-region-near-you/ and there are some interesting papers on how Opensim scales and works from Intel here

Develop Brighton 2011 – featuring Virtual Worlds

I just got the great news that my application to present at the Evolve portion of this year’s Develop conference has been accepted. It will of course be an elective session. Having attended Develop the last couple of years, and really enjoyed the sessions, I felt that the virtual world industry and the crossover into social media was missing. So this year I thought I would throw my hat in the ring and do a version of washing away cave paintings.
Develop 2011 Brighton
Evolve is a part of the conference on day 1 that is about emerging trends and tech. Michael Acton Smith of Moshi is keynoting the day too 🙂
I submitted this as my talk.

EVOLVE: So You Think Virtual Worlds Aren’t Important?
Session Details
User generated virtual worlds may seem a sideshow. Open source development is making inroads, with Opensim, into fully user generated environments hosted anywhere and run by anyone. These are used for entertainment, and in training medical staff or in school education.

They are an innovative platform for the next wave of designers and programmers. We have shown, on the kids’ ITV show The Cool Stuff Collective, that these virtual worlds are growing to be a distribution platform for goods that will not remain solely digital properties. Virtual things are getting real. Come and pay attention to virtual worlds and the future.

Wish me luck and maybe see you there (early bird registration finishes June 15th).

In Washington DC for FCVW 2011

Right now I am in the US, in Washington for the Federal Consortium of Virtual Worlds, as part of a delegation from e426.org and the IEEE to discuss virtual environment standards and ecosystems. This is all very cool and forward thinking, as we are not just talking about file formats but about human experience.
Rather pleasingly, though, Dave Taylor has a poster session here and did a demo yesterday of the Missive project, around major incident simulations. This is a project very dear to my heart as various aspects of it are things that I built as Feeding Edge Ltd.
It is worth reading his post(er) and I am sure I will talk to a few more people at the conference about it.
Lots of the environments send and receive data from various sources. A web menu drives options, a remote virtual patient in a scenario model responds. I do all the brokering, message aggregation and interpretation in the middle. Old habits die hard; this was SL/Opensim middleware.
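As a rough sketch of that brokering pattern (hypothetical Python, with made-up message names, and nothing like the actual Missive code), the middleware just sits in the middle, aggregating what comes in and passing interpreted instructions back out:

# A minimal, hypothetical sketch of the brokering pattern described above:
# messages arrive from different sources (a web menu, a virtual patient model),
# get aggregated and interpreted, and a simplified instruction goes back out
# to the virtual world. Names and payloads are illustrative only.
from queue import Queue

incoming = Queue()   # everything the broker hears from web menus, models, etc.
outgoing = Queue()   # what the broker tells the virtual world to do

def broker(message):
    """Aggregate and interpret one inbound message."""
    source, payload = message["source"], message["payload"]
    if source == "web_menu":
        # A facilitator picked a scenario option on a web page.
        outgoing.put({"action": "set_scene", "scene": payload})
    elif source == "patient_model":
        # The remote virtual patient model reports a change in state.
        outgoing.put({"action": "update_patient", "vitals": payload})

if __name__ == "__main__":
    incoming.put({"source": "web_menu", "payload": "major_incident_1"})
    incoming.put({"source": "patient_model", "payload": {"pulse": 120}})
    while not incoming.empty():
        broker(incoming.get())
    while not outgoing.empty():
        print(outgoing.get())
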
It is great to catch up with friends old and new from the industry, but if you need an ROI on virtual worlds: do you want to sit like this for 8 hours? 🙂
Legroom? Surely Virgin was better than this?
Also I discovered on the flight over just how pitiful the 3DS battery life is: no wi-fi on and it only lasted about 3.5 hours of playing Ghost Recon! Not good. Though it did make me watch The Social Network on the seat-back TV instead, which was much better than I thought it would be.
Anyway, off to the conference.

Cool Stuff Opensim and Metaverses (no Second Life) – S2Ep13

Metaverses are what I do and have done for a living for a good few years now. Just in case you did not know 🙂
This morning I finally got to see the piece on Opensim I did as part of my future tech slot on The Cool Stuff Collective. It has been odd getting to see it, as I ended up with a duff DVD (it broke just before the piece), then just as we sat down to watch some friends arrived so we had to put it on pause, but in the end I watched it.
Opensim on the TV
I managed to say Opensimulator (though we don’t say URLs) and also get in a great reference to QuestHistory (from Rezzable), who kindly provided the OAR file for the Greenies room that we flew around in on the show.
We had a two-computer setup and were talking to an OpenSim server I have running on a Slicehost box in the US. We were both running the Imprudence viewer, mine on the Mac and Sy’s on Windows. We had some picture-in-picture mixing too, so Sy had his view and I had mine. I got Sy to build a cube and then I modified it to illustrate the building tools.

My copy of the Cool Stuff set that we used was a 30-minute build I did on a local Opensim on the train one day.
The piece was a bit too long so some of the things we did didn’t make the edit, but all the important pieces, from co-creation, immersion and shared experience right up to and including the ability to drive 3D printers from the models we build and share together, made it into the show.
My avatar had green spikey hair, my striped leather jacket and some felt tip trousers. These are all things that have a meaning and legacy for me, despite being “virtual”. The green spikes were part of my original Second Life avatar, the jacket is my actual jacket, photographed and textured, as I wear in SL now, and the felt tip trousers were something I coloured in when doing some painting with predlet 1.0 back in 2006. It would have been nice to have been able to talk about Second Life as an original seed and commercial operation, versus the open source Opensim. However SL is really not geared up for 7-12 year old users so we could not encourage that. With the closing of the teen grid and the apparent ambivalence to education and the establishment in general, SL is missing some great opportunities to make a difference again. So it was great to be able to talk about one that is actually doing things with QuestHistory. Well done Rezzable!
I gave Sy a Cool Stuff Collective t-shirt and skinny jeans 🙂 So we actually have more virtual t-shirts than real ones for the show? Anyone out there wanting to do some merchandising for the show, please contact Archie Productions. I am sure there is a market there!
It was interesting to be, technically, making live machinima too. It was an extra element of complexity that we had two cameras in world, a composite screen in the studio as a feed and also two real cameras pointing at us. Now that we have the studio in Opensim, though, we could do a whole show from there 🙂
Scale means nothing in virtual worlds
It has taken 26 shows to get to do Opensim. I had been using it as the emergency backup for any tech failures, but the two tech fails we have had on future tech did not fit, nor would a quick piece do it justice, so I am glad we waited.
Anyway, one more show left in the series next Saturday, a Big Gadget Adventure piece. There is also a rather awesome gadget in the studio!

UPDATE – As the clips are no longer online at ITV there is a little bit on YouTube you can view. You can catch up on the clips of the shows here, or if you are quick you can watch entire episodes on the ITV player here.

Techie Post: Opensim and Freeswitch problems

I was just replying to a note asking about Opensim and Freeswitch based on the fact I have it working(ish) on Ubuntu on a cloud server. I have been meaning to share where I am up to, though I have not found a complete solution to the problem I am having. However this snippet may help someone, or they may be able to help me 🙂

My Opensim and Freeswitch are running on Ubuntu Karmic on a cloud service. Both Opensim and Freeswitch are built from source rather than binary distributions. However the ini files and config should be the same (as that’s the beauty of this).

Freeswitch works by really just opening a conference call which any avatar will dial into. I am not sure how much digging you have done into the problem but there are a few things to look out for.
I am assuming you have followed the main instructions for config etc http://opensimulator.org/wiki/Freeswitch_Module

I had this all running from 0.6.8 onwards but recently moved to 0.7. I noticed in bin/config-include that there was now a line in StandaloneCommon.ini for Freeswitch, not just in OpenSim.ini. I think I had to uncomment that when I moved to 0.7.

The main thing is to make sure that Freeswitch is started first and ready, then spark up Opensim.

The default for a region in Opensim is to not have voice enabled at the land parcel and region level. In Hippo I enter god mode and set both the estate options and the parcel options to allow voice. Having done that (only needed if reloading the terrain or something major), I usually log off and log on again with the viewer and it gets enabled. That one has caught me a few times.

The other thing to consider is that some of the viewers, in particular the more recent open source ones, are not allowed to package the SLVoice components with them. That one caught me out when I asked someone to use Hippo on Windows. The Mac Hippo is old enough to still have the voice DLL but the newer Windows one did not have it! SL Viewer 1 should of course be OK, but I think it is still not advisable to use SL Viewer 2 on Opensim. Imprudence discusses a patch http://imprudenceviewer.org/wiki/How_to_Re-enable_Voice_Chat which is how I found the root of the problem.

The server firewall is another thing to check, as is the client firewall. The ports are potentially different, as Freeswitch is a completely different application to Opensim, and the viewer can get blocked.
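A quick, generic way to sanity check reachability from a client machine is below (the host and port are placeholders for your own setup; note it only tests TCP, and the voice media itself travels over UDP, which this does not cover):

# Generic reachability check: can this machine open a TCP connection to the
# server on a given port? Host and port below are placeholders; Freeswitch and
# Opensim listen on different ports, so check each one you actually use.
# The RTP audio goes over UDP, which this simple test does not exercise.
import socket

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("my-cloud-server.example.com", 5060))   # placeholder host and port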

Should all of the above be OK then it’s console time for both Opensim and Freeswitch. There are some spurious errors, it seems, as things try to establish connections. However if you get a couple of clients connected (assuming you get voice enabled in the client) you can go into Freeswitch and use the sofia commands. sofia status and sofia status profile internal both give a bit of information. I am no expert on the commands, but I have been able to see if clients have connected from Opensim.

This is where we get to a problem I now have. Freeswitch seems to not be working for everyone who comes to my sim. In my testing I used a Mac and a Windows machine on my own network at home, both talking to the remote cloud server. That has always worked (though technically it shows as the same IP address twice). I patched in someone else across the country who was using SL Viewer 1 and we had a conversation, so I thought it was all working. However a few other people, when we have tried a larger meeting, have experienced problems.

It has been a mix of using a viewer with no SLVoice in it, firewalls, but also some strange timing behaviour. It is quite difficult to test, but that is why I have been using the sofia commands.

The first person to connect will generally get hold music playing. The second person to connect will enter the call; there is a beep as that happens and the hold music ends. However you cannot always hear anyone speaking.
I suspect this is the client firewall operating, i.e. Opensim will talk to Freeswitch at a server to server level to patch the person in, and sofia status tends to show me all the users patched in on the Freeswitch console. After that it is the SL clients doing the work. When I get no voice response it tends to break the call for everyone. I have not found out how to check for that or tell Freeswitch to ignore bad calls. Getting the ones that break it to log off, so there is only one person (i.e. me) in the call, sometimes takes a few minutes before the hold music kicks in again. There are some sofia reset commands for the profile that I have dabbled with.

It was only a few weeks ago I hit this snag, and the people using my Freeswitch are not always in a position to mess with their company firewalls, so it has been hard to get a test rig that fails consistently to try and debug it.

I have been meaning to write this down somewhere to help others but wanted to try and fix the problem first; then again, not many people are using or trying Freeswitch.

Having said that, if you get the voice connected (probably via the god mode parcel audio solution at the start of this) you may find it just works. It could be my server having some bandwidth or memory issues etc. It’s a tricky one to spot.

Rezzable, Unity3d, Opensim FTW

The team at Rezzable have a live demo up and running using Unity3d talking to Opensim and getting some of the geometry and packages from the Opensim server.
Rezzable Unity3d Opensim
It is good to see this sort of experimentation happening, and I think it will lead to yet more people trying things out. I know there are more out there too, so maybe there is some sort of Unity3d coalition or open source style federation that could get together and share to drive this forward?
As I have written before, there are all sorts of options (my most read post ever, so it must be of interest) for how this can be mixed and matched. Some of it is smoke and mirrors for user experience, but nonetheless immersive.
The key is to consider other ways to achieve the goals of a particular virtual world experience, from immersion to whether things need to be user created or not.
So well done Rezzable, and thank you for sharing it so far 🙂

Opensim/Second Life Vs Unity3d

I actually get asked a lot about how Unity3d stacks up against Opensim/Second Life. This question is usually based on wanting to use a virtual world metaphor to replicate what Opensim/Second Life do, but with a visually very rich browser-based client such as Unity3d.
There is an immediate clash of ideas here though, and a degree of understanding needed that Unity3d is not comparable in the usual sense with Second Life/Opensim.
At their very heart you really have to consider Opensim and Second Life as being a server that happens to have a client to look at it. Unity3d is primarily a client that can talk to other things, such as servers, but it does not have to in order to be what it needs to be.
Now this is not a 100% black and white description but it is worth taking these perspectives to understand what you might want to do with either type of platform.
Everything from an Opensim-style server is sent to all the clients that need to know: the shapes, the textures, the position of people etc. When you create things in SL you are really telling a server to remember some things and then distribute them. Clearly some caching occurs, as everything is not sent every time, but as the environment is designed to be constantly changing in every way it has to be down to the server to be in charge.
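A minimal sketch of that server-in-charge model (hypothetical Python, nothing like the real Opensim internals): the server owns the scene and pushes every change to every connected viewer:

# Minimal, hypothetical sketch of the "server is in charge" model described
# above: the server owns all scene state and pushes changes to every client.
class WorldServer:
    def __init__(self):
        self.scene = {}      # object id -> properties (shape, texture, position...)
        self.clients = []    # connected viewers

    def connect(self, client):
        self.clients.append(client)
        # A new viewer gets told about everything currently in the scene.
        for obj_id, props in self.scene.items():
            client.receive(obj_id, props)

    def create_or_update(self, obj_id, props):
        # Creating or editing an object really means telling the server,
        # which remembers it and distributes the change to everyone.
        self.scene[obj_id] = props
        for client in self.clients:
            client.receive(obj_id, props)

class Viewer:
    def __init__(self, name):
        self.name = name

    def receive(self, obj_id, props):
        print(f"{self.name} renders {obj_id}: {props}")

if __name__ == "__main__":
    server = WorldServer()
    server.connect(Viewer("client A"))
    server.connect(Viewer("client B"))
    server.create_or_update("cube-1", {"shape": "cube", "pos": (128, 128, 22)})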

Now compare this to a “level” created in Unity3d. Typically you build all the assets into the Unity3d file that is delivered to the client, i.e. it is a standalone, fully interactive environment. That may be Space Invaders, car racing, an FPS shooter or an island to walk around.

Each person has their own self-contained, highly rich and interactive environment, such as this example. That is the base of what Unity3d does. It understands physics, ragdoll animations, lighting, directional audio etc. All the elements that make an engaging experience with interactive objects and good graphic and sound design.
Now as Unity3d is a container for programming, it is able to use network connectivity to talk to other things. Generally this is brokered by a type of server. Something has to know that 2, 3 or many clients are in some way related.
The simplest example is the Smartfox server multiplayer island demo.
Smartfox is a state server. It remembers things, and knows how to tell other things connected to it that those things have changed. That does not mean it will just know about everything in a Unity3d scene. It is down to developers and designers to determine what information should be shared.

In the case above a set of Unity clients all have objects numbered 1, 2 and 3 in them. It may be a ball, a person and a flock of birds, in that order.

When the first client moves object number 1, Smartfox, on your own remote web server somewhere in the ether, is just told some basic information about the state of that ball: it’s not here now, it’s here. Each of the other Unity clients is connected to the same context, hence they are told by the server to find object number 1 and move it to the new position. Now in gaming terms each of those clients might be a completely different view of the shared system. The first two might be a first-person view, the third might be a 2D top-down map view which has no 3D element to it at all. All they know is that the object they consider to be object number 1 has moved.
In addition, object number 3 in this example never shares any changes with the other clients. The server does not know anything about it, and in the Unity3d client it claims no network resources.
This sort of game object is one that is about atmosphere, or one that has no real need to waste network traffic sending changes around. In the island example from Unity3d this is a flock of seagulls on the island. They are a highly animated, highly dynamic flock of birds, with sound, yet technically in each client they are not totally the same.
(Now SL and Opensim use this principle for things such as particles and clouds, but that is designed in.)
For each user, they merely see and hear seagulls; they have a degree of shared experience.
Games constantly have to balance the lag and data requirements of sending all this information around against things that add to the experience. If multiplayer users need to have a common point of reference, and it needs to be precise, then it needs to be shared. E.g. in a racing game the track does not change for each person; however debris and the positions of other cars do.
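To make the contrast concrete, here is a tiny hypothetical sketch (again Python, not actual Smartfox) of that relay model: the server only ever hears about the object ids the developers chose to share, and something like the seagulls never touches the network:

# Hypothetical sketch of the Smartfox-style relay described above: clients
# already have all the assets; the server only relays changes for the object
# ids the developers chose to share. Object 3 (the seagulls) is never shared.
class StateServer:
    def __init__(self):
        self.clients = []

    def join(self, client):
        self.clients.append(client)

    def object_moved(self, sender, obj_id, position):
        # Relay the change to every other client in the same context.
        for client in self.clients:
            if client is not sender:
                client.apply(obj_id, position)

class GameClient:
    SHARED_IDS = {1, 2}   # ball and person are shared; 3 (seagulls) is local only

    def __init__(self, name, server):
        self.name = name
        self.server = server
        server.join(self)

    def move(self, obj_id, position):
        if obj_id in self.SHARED_IDS:
            self.server.object_moved(self, obj_id, position)
        # Local-only objects (the seagull flock) just change here, no network cost.

    def apply(self, obj_id, position):
        # Each client decides how to show it: first-person view, 2D map, etc.
        print(f"{self.name}: object {obj_id} moved to {position}")

if __name__ == "__main__":
    server = StateServer()
    a = GameClient("first-person view", server)
    b = GameClient("map view", server)
    a.move(1, (10, 4, 0))   # the ball: relayed to the map view
    a.move(3, (99, 99, 0))  # the seagulls: stays local, nothing is sent
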
In dealing with a constantly changing environment Unity3d is able to be told to dynamically load new scenes and new objects in a scene, but you have to design and decide what to do. Typically things are in the scene but hidden, or generated procedurally, i.e. the flock of seagulls copies the seagull object to populate the flock.
One of the elements of dealing with the network lag in shuffling all this information around is interpolation. Again using a car example: typically if a car is travelling north at 100mph and the client does not hear anything about the car’s position for a few milliseconds, it can guess where the car should be.
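The simplest form of that guess, assuming the car keeps its last known velocity until the next update arrives, is just:

# Simplest form of the extrapolation guess described above: assume the car
# kept its last known velocity until the next network update arrives.
def predict_position(last_pos_m, velocity_ms, elapsed_s):
    return tuple(p + v * elapsed_s for p, v in zip(last_pos_m, velocity_ms))

# 100 mph northwards is roughly 44.7 metres per second.
last_known = (0.0, 0.0)
velocity = (0.0, 44.7)
print(predict_position(last_known, velocity, 0.05))  # ~2.2m further north after 50ms
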
Very often virtual worlds people will approach a game client expecting the game engine to come packaged as the actual server; likewise game-focused people will approach virtual worlds as a client, not a server.
Now as I said this is not black and white, but Opensim and Second Life and the other runnable virtual world services and toolkits are a certain collection of middleware to perform a defined task. Unity3d is a games development client that, with the right programmers and designers, can make anything, including a virtual world.

*Update: I meant to link this in the post (thanks Jeroen Frans for telling me 🙂) but hit send too early!
Rezzable have been working on a Unity3d client with Opensim, specifically trying to extract the prims from Opensim and create Unity meshes.

Unity3d and voice is another question. Even in SL and Opensim voice is yet another server; it just so happens that who is in the voice chat with you is brokered by the main server. Hence when comparing to Unity3d again, you need a voice server, and you need to programmatically hook in what you want to do with voice.
As I have said before though, and as is already happening to some degree, some developers are managing to blend things such as the persistence of the Opensim server with a Unity3d client.
Finally, in the virtual world context, in trying to compare a technology or set of technologies we actually have a third model of working. A moderately philosophical point, but trying to use virtual worlds to create mirror worlds at any level will suffer from the model we are basing it on, namely the world itself. The world is not really a server and we are not really clients. We are all in the same ecosystem; what happens for one happens for all.

Helping organise thoughts, and speaking freely

Once again for my pitch at Metameets I used a little bit of virtual world technology as a mental aid to help me figure out what it was I was going to say and in what order. I am finding this an increasingly useful technique, so as this is the second time around for it I thought I would explain it again.
I created the presentation in the usual fashion, a set of Keynote/PowerPoint slides, mostly because I had ones from other presentations that I wanted to thread in and re-use. Once I had the single-threaded narrative sorted, and the slides in the right order, a structure for what is in effect a story appeared.
In order to preserve this structure I took images from all the slides and put them into my local Opensim.
Once in there, these simple flat panels take on a relationship to one another that just does not happen when you are flicking from slide to slide.
Snapshotmetameets_001
In the picture above the pitch really started at the back. The back row is the underpinning, an introduction. Seven related slides to go through at any pace, but ideally quite quickly.
Next is a challenging step forward, as the next row comes forward. To support that is another row, which is held to the side a little as it is really optional. In this case it’s a slide about how it feels to attach things to your avatar. That allows free expression, no script, just a memory of a feeling.
Then the next row is grouped into three. This is a change of pace, a set of ideas with an example in each of the three.
Then the penultimate row, as you come forward, is the tying-it-all-together picture. Ideally by the time I get here the other three ideas have started to make a little sense.
Then there is the final slide to finish on.
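If it helps to picture it, here is a toy sketch (purely illustrative, not how I actually placed the panels) of the idea: each group of slides becomes a row, and each row steps forward as the story progresses:

# Toy sketch of the spatial layout idea: each row of slide panels sits a bit
# further forward (closer to the viewer) as the narrative progresses, with
# slides within a row spread left to right. Numbers are arbitrary.
def layout(rows, row_spacing=4.0, slide_spacing=2.5):
    positions = {}
    for row_index, slides in enumerate(rows):
        y = -row_index * row_spacing          # later rows come forward
        for slide_index, slide in enumerate(slides):
            x = (slide_index - (len(slides) - 1) / 2) * slide_spacing
            positions[slide] = (x, y)
    return positions

story = [
    [f"intro-{i}" for i in range(1, 8)],      # the back row: seven intro slides
    ["challenge", "avatar-attachment"],       # the step forward, plus the optional aside
    ["idea-1", "idea-2", "idea-3"],           # three grouped ideas
    ["tying-it-together"],
    ["final"],
]
for slide, pos in layout(story).items():
    print(slide, pos)
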
Rather than sit and read the slides or over-rehearse, I just had this image of the layout on my iPhone. It is enough that I can see the sort of pages I had created, and when presenting I can think about where I am in the flow. So if there is some sort of disruption, or an idea that comes to mind whilst talking, I can think quickly as to where it will fit in or if it has missed its chance.
This is really just storyboarding meets mindmapping, but the multiple dimensions of it and the visual memory of having been there are incredibly powerful, and I recommend anyone try structuring some thoughts, ideas or key points in some way like this to see how it feels.
Does anybody else do this? I would love to hear stories about it.

We have all the pieces – Unity3d, Opensim, Evolver, Smartfox

I have now had this conversation several times with people about the potential future that a mix of open source and open minded development may bring to the virtual world industry. Much of what is happening seems to be driven by some of the direction Second Life has taken or is taking, though not so much to follow it into corporate lockdown but to break out and provide the flexibility and creativity that is needed for the next generation of virtual worlds.
In the early days of 2006 many of us said it would be great to be able to run a Second Life server, our own one under our control. That has taken a while to start to emerge, but it has emerged as an expensive product aimed at corporate IT departments. Luckily the open source community has rallied and created the excellent Opensim. This ticks all the boxes of being able to be run locally, run in the cloud or provided as a service. So we have an extensible virtual world server ready to be built upon.
The other component missing was a more controllable and rich interface. Yes, there is the Second Life Snowglobe open source client, but the need to certify and lock down variants to align with the product needs of Second Life means that lots of the flexibility is lost. Likewise the initial open source Linden-based client was under a GPL licence, which caused all sorts of development not to happen at the time it really could have done with it.
This is where Unity3d steps in I believe. It was Rob Smart who first started to show me this way back. Unity3d is a great front end, very flexible in how you build games and content for it.
This was a movie from back in September 2008, using a message from Second Life to several Unity clients to create a cube. This is loose integration: telling one place something has happened and letting the other place get on with it.
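The shape of that loose integration is roughly this (a hypothetical sketch, not the actual code from the demo): the sending side fires off a small "this happened" message and each receiving client decides for itself how to act on it:

# Hypothetical sketch of the loose integration described above: Second Life
# (or anything else) posts a tiny "a cube was created here" message to a relay,
# and each listening Unity-style client gets on with creating its own cube.
class Relay:
    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def publish(self, event):
        for listener in self.listeners:
            listener(event)

def unity_client(name):
    def handle(event):
        if event["type"] == "create_cube":
            # The client owns the how: mesh, material, physics are all local.
            print(f"{name}: creating a cube at {event['position']}")
    return handle

if __name__ == "__main__":
    relay = Relay()
    relay.subscribe(unity_client("unity client 1"))
    relay.subscribe(unity_client("unity client 2"))
    # This is the kind of message an in-world script could send outwards.
    relay.publish({"type": "create_cube", "position": (12, 3, 25)})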

Unity3d has a plugin architecture too. It runs in a browser or deploys to application platforms like Mac and Windows. The visuals can be made very good very quickly too. Unity3d needs a server of some sort to operate as a multiuser platform (though it does do some peer-to-peer), hence applications like Smartfox are ideal for producing Unity multiplayer and MMO-style games.
However Opensim has all the other layers of things needed to maintain a virtual world. It has asset databases, chat, positional awareness and server-side scripting (as does Second Life, which it was originally based on, of course).
So we have an extensible and easy to get hold of Unity3d client engine, and an extensible and easy to get hold of server/persistence virtual world engine in Opensim. There may well be challenges in making the two understand one another, but with the flexibility on both sides of the equation they are very solvable. This is a high-level view; Rob has some more detail here on the challenges. Add in some interoperability definition with Vastpark to help bind the two and make some mappings.
Throw into the mix an open minded avatar wizard such as Evolver. There we can build avatars that we know can definitely be dropped into Unity3d.
So… I create an Evolver avatar, dropped as a resource bundle into a web-deployed Unity3d client that tells the Opensim server where I am in the coordinate system and which bundle I am using. Other people with a Unity3d client see the rich, detailed avatar and the shiny Unity3d environment. However we do not have to stick to that one client. Other people using a Second Life style client see the Second Life style rendering of the world.
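Sketching that flow (hypothetical field names, not a real Opensim or Evolver API): the client only needs to tell the shared server which bundle it is wearing and where it is, and each viewer renders that however it can:

# Hypothetical sketch of the flow described above: a Unity-style client tells
# the shared server which avatar bundle it is using and where it is; each
# connected viewer renders that information at whatever fidelity it can manage.
presence_update = {
    "avatar_id": "example-avatar",
    "bundle_url": "http://example.com/avatars/example-evolver.bundle",  # made-up URL
    "position": (128.0, 96.0, 22.5),
    "rotation": 90.0,
}

def render(viewer_type, update):
    if viewer_type == "unity3d":
        print(f"Unity viewer: load {update['bundle_url']} at {update['position']}")
    else:
        # A Second Life style viewer has no idea about Unity bundles; it just
        # shows its own avatar representation at the shared coordinates.
        print(f"SL-style viewer: show standard avatar at {update['position']}")

for viewer in ("unity3d", "sl_style"):
    render(viewer, presence_update)
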
This is already happening in some respects: the iPhone application Touch Life lets you log on to the public Second Life. In a sort of Bugblatter Beast of Traal moment everyone can see you, but you can’t see them. You navigate your avatar around the map and have full chat, inventory and economy access, but a very different view of the world to everyone else. (Of course Unity3d runs well on an iPhone too, so imagine that as an extension to Touch Life?)
Once there is an acceptance that there can be more than one view of the data, one where people without the full equipment can still see what is going on and participate, things get a lot easier to consider. Whilst a gaming assumption tends to be that we all need the same view at the same speed in order to have balanced gameplay (lag gets you killed), in collaborative spaces, education, meetings and art galleries this is less of an issue.
As the parts of the jigsaw come together over the next year the ability to have the same experience will re-emerge.