

MergeVR – a bit of HoloLens but now

If you are getting excited and interested, or just puzzling over what is going on with the Microsoft announcement about HoloLens, and can't wait the months/years before it comes to market, then there are some other options, very real, very now.
Just before Christmas I was very kindly sent a prototype of a new headset unit that uses an existing smartphone as its screen. It is called MergeVR. The first one like this we saw was the almost satirical take on the Oculus Rift that Google produced with Google Cardboard, a fold-up box that let you strap your Android to your face.

MergeVR is made of very soft, comfortable spongy material. Inside are two spherical lenses that can be slid in and out laterally to adjust to the divergence of your eyes and get a comfortable feel.
Rather like the AntVR I wrote about last time, this uses the principle of one screen split into two views. The MergeVR uses your smartphone as the screen, and it slides comfortably into the spongy material at the front.
Using an existing device has its obvious advantages. The smartphones already have direction sensors in them, and screens designed to be looked at close up.
MergeVR is not just about the 3d experience of Virtual Reality (one where the entire view is computer generated). It is, by its very name Merge, about augmented reality. In this case it is an augmented reality taking your direct view of the world and adding data and visuals to it. This is known as a magic lens. You look through the magic lens and see things you would not normally be able to see. As opposed to a magic mirror, where you look at a fixed TV screen to see the effects of a camera view merged with the real world.

The iPhone (in my case) camera has a slot to see through on the MergeVR. This makes it very different from some of the other Phone On Face (POF – made up acronym) devices. The extra free device I got with the AntVR, the TAW, is one of these non pass-through POFs. It is a holder and lenses, with a folding mechanism that adjusts to hold the phone in place. With no pass-through it is just for watching 3d content.


AntVR TAW
Okay, so the MergeVR is able to let you use the camera, see the world, and then watch the screen close up without holding anything. The lenses make your left eye look at the right half and the right eye at the left half. One of the demo applications is instantly effective and has a wow factor: using a marker-based approach, a dinosaur is rendered in 3d on the marker. Marker-based AR is not new, neither is iPhone AR, but the stereoscopic, hands-free approach, where the rest of the world is effectively blinkered for you, adds an extra level of confusion for the brain.

Normally if you hold a phone up to a picture marker, the code will spot the marker, the orientation of the marker and its relative position in the view, then render the 3d model on top. So if you, or the marker, moves, the model is moved too. When holding the iPhone up you can of course still see around it, rather like holding up a magnifying glass (magic lens, remember). When you POF, though, your only view of the actual world is the camera view of the phone. So when you see something added and you move your body around, it is there in your view. It is only the slight lag, and the fact the screen is clearly not the same resolution or same lighting as the real world, that causes you to not believe it totally.
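To make that marker-based rendering step concrete, here is a minimal sketch (my own illustration, not the actual MergeVR or DinoAR code) of what happens once the marker's position and orientation have been detected: the model's vertices are rotated and translated to the marker's pose, then projected into the camera image, so moving the marker (or your head) moves the rendered model. The focal length and image-centre numbers are made up for the example.

```python
import math

def rotate_yaw(p, angle):
    """Rotate a 3d point around the vertical (y) axis."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(p, focal=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space point to pixel coordinates."""
    x, y, z = p
    return (cx + focal * x / z, cy + focal * y / z)

def render_marker_model(model_points, marker_pos, marker_yaw):
    """Place the model at the detected marker pose and project each vertex."""
    tx, ty, tz = marker_pos
    screen = []
    for p in model_points:
        x, y, z = rotate_yaw(p, marker_yaw)
        screen.append(project((x + tx, y + ty, z + tz)))
    return screen

# Model origin on a marker 2 metres straight ahead, no rotation:
print(render_marker_model([(0.0, 0.0, 0.0)], (0.0, 0.0, 2.0), 0.0))
# → [(320.0, 240.0)] — it lands at the image centre
```

Real toolkits recover the marker pose from the camera image first; the point here is just that the overlay is re-projected every frame from that pose, which is why the model appears locked to the marker as you move.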
The recently previewed Microsoft HoloLens and the yet to be seen Google-funded Magic Leap are a next step, removing the screen. They let you see the real world, albeit through some panes of glass, and then use projection tricks near to the eye, probably very similar to Pepper's ghost, to adjust what you see and how it is shaded, coloured etc. based on a deep sensing of the room and environment. It is markerless, room-aware blended reality, using the physical and the digital.

Back to the MergeVR. It also comes with a bluetooth controller for the phone, a small handheld device to let you talk to the phone. Obviously, with the touchscreen hidden away in POF mode, you can't press any buttons 🙂 Many AR apps and examples like the DinoAR simply use your head movements and the sensors in the phone to determine what is going on. Other things, though, will need some form of user input. As the phone can see, it can see hands, but without a Leap Motion controller or a Kinect to sense the body, some simpler mechanism can be employed.
However, this is where MergeVR gets much more exciting and useful for any of us techies and metaverse people. The labs are not just thinking about the POF container but the content too. A Unity3d package is being worked on. This provides camera prefabs (rather like the Oculus Rift one) that split the Unity3d view into a stereo camera at runtime, with the right shape, size, perspective etc. for the MergeVR view. It provides extra access to the bluetooth controller inputs too.
This means you can quickly build MergeVR 3d environments and deploy to the iPhone (or Droid). Combine this with some of the AR toolkits and you can make lots of very interesting applications, or simply add 3d modes to existing ones you have. With the new Unity3d 4.6 user interfaces it will be even easier to have heads-up displays.
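As a rough sketch of what such a stereo camera split involves (my own illustration, not the actual MergeVR Unity3d package): each eye gets half the screen as its viewport, and each eye's camera sits offset sideways by half the interpupillary distance. The screen size and IPD figures below are plausible assumptions, nothing more.

```python
def stereo_cameras(screen_w, screen_h, ipd=0.064):
    """Split one screen into side-by-side viewports with offset eye cameras.

    Returns (viewport_rect, eye_x_offset) pairs for the left and right eyes.
    Viewport rects are (x, y, width, height) in pixels; eye offsets are in
    metres from the head centre, half the interpupillary distance each way.
    """
    half = screen_w // 2
    left = ((0, 0, half, screen_h), -ipd / 2)
    right = ((half, 0, half, screen_h), +ipd / 2)
    return left, right

left, right = stereo_cameras(1334, 750)  # an iPhone-sized screen, landscape
print(left)   # ((0, 0, 667, 750), -0.032)
print(right)  # ((667, 0, 667, 750), 0.032)
```

In Unity terms this maps to two cameras with different normalised viewport rects and local x positions; the package presumably wraps exactly this kind of setup in a prefab so you don't do it by hand.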
So within about 2 minutes of starting Unity I had a 3d view up on the iPhone in the MergeVR using Unity Remote. The only problem I had was using the USB cable for quick Unity Remote debugging, as the left-hand access hole was a little too high. There is a side access on the right, but the camera needs to be facing that way. Of course, being nice soft material, I can just make my own hole in it for now. It is a prototype after all.
It’s very impressive, very accessible and very now (which is important to us early adopters).
Let's get blending!

(Note the phone is not in the headset as I needed to take the selfie 🙂)

Another VR headset – AntVR

I backed a project called AntVR to produce another VR headset, and it recently arrived. Whilst all these things may look the same, this takes a slightly different approach to the now famous Oculus Rift.
The box arrived via UPS, unfortunately there seems to have been a mistake on the shipping. Whilst this was included in the Kickstarter price I had to pay the delivery man another £35. That’s early adoption for you!

The main headset unit is about the same size and weight as an Oculus DK2. The main cables, rather than passing over your back, drop down either side of your cheeks.


The inside view is not two round lenses as with the Rift but two much flatter rectangular lenses. It fits over my regular glasses (though being varifocals they are not ideal for any of these headsets).

The reason for the difference in lens shape is quite fundamental to the entire design. The AntVR uses one screen inside the headset, and the lenses direct the eyes to focus on that screen. The Rift appears to have 2 screens, one for each eye, with a spherical image projected onto each. The AntVR is really a regular monitor looked at up close.
Whilst the latter is a lesser experience, it does have a major advantage. The AntVR has a mode button on it to let it just be a regular desktop screen. This makes for much less messing around (particularly in Windows) with which monitor is which. Often with the Rift you find yourself trying to half look at the regular desktop when working in extended mode, or swapping the primary monitor when an application doesn't use direct mode.
So once in the AntVR you are looking at a big screen. The mode button on the unit lets you switch to side-by-side 3d. So anything that has SBS video, like YouTube, you can watch straight away: look at a normal view, navigate in the browser etc., switch to full screen, then press the button on the unit and you have 3d. I did find that on Windows 8.1 Internet Explorer seemed to refuse to show SBS video on YouTube. The scrub bar showed side-by-side images but the main window just the one. On Chrome it was fine. Initially I thought the headset was causing it, but I think it was just the browser being the browser it is!
The unit has two sliders to move the lenses outward and inwards laterally to adjust for your own eye comfort.
The headset acts as a mouse pointer with the accelerometers in it moving the mouse around, which is not always very helpful to start with.
The kit then diverges from the Rift in what it provides.
Firstly it will plug into any HDMI source (needing USB to power it). Because it is not always in SBS 3D it doesn’t need the spherical processing the Rift has to do. Obviously that flexibility is a trade off with the field of view of the Rift and the comfort factor on the eyes.
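For the curious, that "spherical processing" is a radial pre-warp that counteracts the pincushion distortion of the Rift's lenses. A common way to model it (a sketch of the general technique, not Oculus's actual shader, and with purely illustrative coefficients) is to scale each point by a polynomial in the squared distance from the lens centre:

```python
def barrel_distort(x, y, k1=0.22, k2=0.24):
    """Radially warp a point in normalised lens coordinates (centre = 0,0).

    The scale factor 1 + k1*r^2 + k2*r^4 grows with distance from the lens
    centre, pre-distorting the image so the lens optics straighten it out.
    The k values here are illustrative, not real DK1/DK2 coefficients.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * scale, y * scale)

print(barrel_distort(0.0, 0.0))  # the centre is unmoved: (0.0, 0.0)
print(barrel_distort(0.5, 0.0))  # points further out are pushed further still
```

Applying this to every rendered pixel, for each eye, every frame, is the extra work the AntVR avoids by being a plain flat monitor, at the cost of the Rift's wider field of view.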
The AntVR also comes with a peripheral. It is rather like some Klingon device: a few parts clip together and you have what looks like a gun controller.



This controller can also morph, with 2 handles plugged together, into a joypad. Both parts are charged by USB so are both active units.
In addition to the “gun” there is also a cleaning device, a small rubber bladder and nozzle for blowing dirt out of the lenses. They refer to this as the “hand grenade”.

All this came through the post from China. I suspect the delay getting here may have been some puzzled customs people.
There are demo games ready to download. These are designed to be 3d SBS, and they are both horror/zombie ones. I am not sure if I configured things correctly, but the thumbstick on the back of the gun did allow me to move my character in an FPS environment. The gun trigger worked, but it was a little odd using my head to turn and look at the bad guys whilst holding a gun for which it didn't matter where I pointed it. I think I needed to plug a few more things in.
Another interesting feature is a little sliding trap door underneath the headset that lets you peer down and see your hands. Great for typing !
So far I have only tried the unit on my Windows laptop. For gaming, Elite Dangerous for example, the Rift DK2 with its head position sensor and spherical eye view is much more comfortable and immersive. However, getting the thing running can be a pain with so many different ways Windows likes to mess with graphics cards and extended monitors.
AntVR seemed to just work out of the box, the gun controller might be a bit over the top but the ability to work on anything and look at any content without too much hassle is interesting.
Next up I need to write about the two smartphone-as-a-VR-screen devices I have here, one of which was an extra thrown into the box with the AntVR.
We are creeping towards devices that anyone can use without too much faffing about, but we have not got there yet. Then of course we need the content: the environments, the user interfaces and the new ways of engaging.

Digital Snow – Flush Magazine – 15

It's starting to get towards winter and the clocks have fallen back, so this month's Flush Magazine, amongst other things, is all about snow, mountains, skiing and boarding. So I had something snow related to consider for my 14th article for the magazine. (That now sounds like a lot!)
I thought I should cover a bit of history, as usual. Some tech reminiscing. This time it was Hungry Horace Goes Skiing. Anyone else remember that? 🙂
These sorts of flashbacks, to things that felt special or unusual then, act as a seed to zoom forward. This time I covered how snow has felt on screen in games and films through the years, from the simplistic to the "material point method for snow simulation" CGI research papers created by Disney in making films like Frozen.
I didn’t want to stick to pure virtual snow though but some other advances and where it could go in the near future to enhance our skiing experience. I did give a mention to SNOW the game as that looks mighty impressive and has some real potential IMHO.
Thanks to everyone writing for Flush Magazine; it's great to have so many good articles to nestle in between on a cold winter's night. Huge thanks to @tweetthefashion for such awesome production and editing, and for still wanting me to write these journeys in history-based futurism.
The full issue can be found below


I am zooming around on page 116 and the iOS version is here
All the past articles and links to them are here on the writing portfolio page
Right, where are my snow boots!

Oculus Rift – DK2 – Elite Dangerous – Breaking the Flow

Feeding Edge's Oculus Rift DK2 (Development Kit version 2) arrived last week. I had an original Oculus Rift (pre-Facebook buyout) from the Kickstarter campaign. It is amazing looking back and seeing that I wrote about that in August 2013! How time flies!
The new DK2 is a big leap in technology, though it comes in a much less impressive cardboard box, rather than the foam filled attache case of the original one.

The new unit is smaller and lighter but boasts a huge increase in picture quality. There have been quite a few ergonomic tweaks too.

The placing of the wires to the unit is now centre top, and the cable is fed over the top head strap so it doesn't get in the way as much as the side-dangling cable of the DK1.

The old control box from the DK1 is gone; this used to take all the connections, a power switch etc. It was often the first thing to fall on the floor as the cables got tangled up.
The eye pieces are noticeably bigger too, presumably to get all that extra resolution and field of view in.

Aside from the resolution of the screens, the biggest change is the external tracking of the Rift's position. DK1 knew you were turning and tipping, but if you stepped or moved side to side it had no idea. DK2 comes with a sensor that needs to be mounted pointing at the Rift. It can, with software that bothers to use it, tell if you are near or far, right or left, up or down relative to the sensor. It is not a Kinect, but it does give 3d space positioning.
My neighbours may glance up and see this tripod and think it is facing out of the window, when in fact it is pointing at me!

Getting the Rift running on the Mac was a doddle as usual. With no control box the connections are simply an HDMI for the display and two USB connections, one for the camera device and one for the headset. The power lead runs into a tiny splitter block, and a sync lead runs from that to the camera sensor.
It was more tricky getting it to run on Windows 8.1 though. My HP laptop has 2 graphics cards, and trying to persuade it to extend the display and put primary rendered graphics on the Rift is a bit of a pain. I ended up with the primary display turning itself 90 degrees and the secondary Rift flipped 180 degrees in landscape. The demo/tuning application runs in direct mode so you don't need any of this, but lots of the other things rely on extended desktop. You can of course make the Rift your primary display, but it is not then delivered as one image, rather a bit in the left eye and a bit in the right, so it makes launching things difficult.
However, after a bit of "well that's just Windows isn't it!" I sparked up the Elite Dangerous beta. This had upgraded since the last time. Elite has a display-on-secondary option, and though this was not DK2 direct mode it was possible to make it work.
I was instantly blown away by the improvements to the visual resolution of DK2. My laptop managed to cope with the environment too; it may not be the best gaming rig in the world but it certainly worked. I launched out into space and got chills running down my spine with the sense of being there, as the shadow of the cockpit struts moved across my field of view when I turned away from the star I was docked near. It is really, really good. I mapped a button on the stick to recentre the Rift view though, as there was a slight drift. However, with two beta things on Windows 8.1 it was not a huge problem.
I flew around for a while taking in the scenery, then got into a slight dogfight in space. Being able to pull tight turns but look up and backwards to spot the bad guy was really effective. However, I did suffer a little from not being able to see my controls. Using a joystick with loads of buttons is great, but I had not realised how much I relied on seeing button 7! So it did not take my adversary very long to break my ship down. In the previous beta of Elite the canopy would crack, then the ship would blow and you ejected. In the new version, before the cracking, I got sparks and smoke filling the cockpit. This was an incredible effect in the Rift. The combination of the sense of being there in the vastness of space with the confinement of sitting in a smoking capsule is something I will remember as a key moment in my gaming and VR experiences. The Rift makes sense in games like this, as you probably would have a helmet on anyway. That is something a lot of games designers could consider. Rather than trying to pretend it's not something you have on, embrace it and make it part of the game narrative.
I experienced a crash, in typical Windows style, whilst trying to dock at a space station. I can't tell if it was the Rift, the game or the machine that did it, but at a key moment of concentration and full immersion the shock of being unplugged from the environment, to lose the cockpit and the space around me and have them replaced by half of the Windows desktop in each eye, was horrendous. It seems the Rift and Elite were generating a different type of gaming flow. I was not experiencing flow because the task had become automatic; I was concentrating on landing the ship in the right place, which required a lot of thought. However, mentally I was experiencing sensory flow. I had stopped considering the medium as a barrier, or as anything other than real. For reality to be turned off with no warning created a mental jolt. It's a very strange feeling and one that took a few moments to analyse and think about. It shows the power of these imagination enhancers and our blended reality. I should be used to this stuff but it still hits me 🙂

Minecraft mirror worlds

Way back in 2006, when many of us started to use the persistent multi-user networked virtual world with self-created content (Second Life) in all sorts of places like business and education, we thought we were on the cusp of a virtual world revolution. As a metaverse evangelist in a large corporation it was often an uphill struggle to persuade people that communicating in what looked like a game environment was valuable and worthwhile. It is of course valuable and worthwhile, but not everyone sees that straight away. In fact some people went out of their way to stop virtual environments and the people that supported and pioneered their use. Being a tech evangelist means patiently putting up with the same non-arguments and helping people get to the right decision. Some people of course just can't get past their pride, but you can't win them all.
For me and my fellow renegades it started small in Second Life

I also captured many of the events over the course of that first year in a larger set, along with loads of blog posts on eightbar, including my first public post, which always brings back a lot of memories of the time and place. It was a risky thing (if you are worried about career prospects and job security) to publicly post about something that you know is interesting and important but that is not totally supported.

One of the many ways people came to terms with virtual worlds was in the creation of mirror worlds. We all do it. In a world of infinite digital possibilities we ground ourselves by remaking our home, our office, our locale. We have meeting rooms with seats and screens for powerpoint. This is not wrong. As I wrote in 2008 when we had a massive virtual worlds conference in London

It seems we are seeing many more mirror world applications appear once more, and generally the builds are in Minecraft. It is interesting that Minecraft has all the attributes of more detailed virtual worlds like Second Life and Opensim. It is a persistent virtual environment. It has the 8-bit block look and much less detailed and animated avatars, but it does have a very easy approach to building. You stack blocks. You still have to navigate the 3d space, get your first-person view in the right place and drop a block. (Many of the objections in 2006 were from people who felt they could not navigate 3d space easily, something I think has started to melt away as an objection.)

The relative resolution of a build in Minecraft is low; you are not building with pixels but with large textured blocks. This does have a great levelling effect though. The richer environments require other 3d and texturing skills to make things that look good. Minecraft is often compared to Lego: in Lego you can make very clever constructions, but they still look like Lego. This becomes much less of a visual design challenge for those people less skilled in that art. A reduced palette and a single block type mean not having to understand how to twist graphic primitives, upload textures, consider lighting maps etc. It is a very direct and instant medium.

Minecraft has blocks that act as switches and wires that allow the creation of devices that react, but it does not have the feature I used the most in Second Life and Opensim: being able to write code in the objects. There are ways to write code for Minecraft mods, but it is not as instant as the sort of code scripts in objects in more advanced virtual environments. Those scripts alter the world, respond to it, push data to the outside and pull it back to the inside. It allows for a more blended interaction with the rest of the physical and digital world: preserving state, creating user interfaces etc.
It is all stuff that can be done with toolsets like Unity3d and Unreal Engine etc., but those are full dev environments. Scripting is an important element unless you are just creating a static exhibit for people to interact in. Minecraft also lacks many of the human-to-human communication channels. It does not have voice chat by default, though it is side-loaded on some platforms. It has text chat, but it is very console based and not accessible to many people. The social virtual worlds thrive not just on positional communication (as in Minecraft you can stand near something or someone) but on other verbal and non-verbal communication.
The popularity of Minecraft has led some institutions to use it to create, or wish to create, mirror worlds. Already there was the Ordnance Survey UK creation, which "is believed to be the biggest Minecraft map made using real-world geographic data." (Claiming firsts for things was a big thing back in 2006/7 for virtual worlds.) It has just been updated to include houses, not just terrain. By houses, though, this is not a full recreation of your house to walk into but a block showing its position and boundary. This map uses 83 billion blocks, each showing 25m of the UK, according to a story by the BBC.
The BBC also reported this week on the British Museum looking at recreating all its exhibits and its buildings in Minecraft. This is an unusual approach to their exhibits. It will be an interesting build, and they are asking the public to help. So once again Minecraft's simplicity and accessibility will allow anyone to create the artefacts. It will, however, create a very low-res, low-texture experience. So it is a mirror world in that it will be like the museum, but it is more of a model village approach. You admire the talent to make it in the medium. It seems more of an art project than a virtual world project. It is all good though 🙂
The BBC seem to be on a Minecraft vibe, just as they were on a Second Life vibe back in the last wave. Paul Mason and BBC Newsnight came and did a piece with me and the team of fellow virtual world pioneers in 2007. We were starting to establish a major subculture in the corporate world of innovators and explorers looking into how we could use virtual worlds and how it felt. Counterculture of this nature in a corporate is not always popular. I have written this before, but on the day we filmed this for the BBC, in the morning I had my yearly appraisal. My management line were not keen on virtual worlds nor the success, so gave me the lowest rating they could. I went straight from that (which is very annoying and disheartening) into an interview with Paul on camera. Sometimes you just have to put up with misguided people trying to derail you; other times you just get out of the situation and go do it elsewhere.
Anyway, Minecraft offers a great deal. It does allow mirror worlds, though it also allows for an otherworldly approach. Most blocks do not obey gravity. You can build up and then a platform out with no supporting structure; you can dig tunnels and go underground (the point of the mine in Minecraft) and not worry about collapses. You still have real-world up and down, left and right though. Most virtual worlds do this. Disney Infinity, Project Spark, Second Life, Opensim and the new High Fidelity etc. are all still avatars and islands at their core. People might mess with the scale and size of things, such as the OS map or the hard drive machines and guitars mentioned in this BBC piece on spectacular builds.
I feel we have another step to take in how we interact at distance, or over time, with others. Persistent virtual worlds allow us to either be in them or, even better, to actually be them. My analogy here is that I could invite you to my mind and my thoughts, represented in a virtual environment. It may not be that those things are actual mirrors of the real world; they might be concepts and ideas represented in all sorts of ways. It may not mean gravity and ground need to exist. We are not going to get to this until we have all been through mirror world and virtual world experiences. That is the foundation of understanding.

Not all avatars and islands? from Ian Hughes

Whilst many people got the virtual worlds of Second Life et al back in 2006, we were only continuing the ground preparation of all the other virtual world pioneers before us. Minecraft is the first experience to really lay the foundations (all very serious, though it is fun to play with too!). These simple 8-bit style virtual bricks are the training ground for the next wave of virtual worlds, user-generated content and our experiences of one another's ideas. They may be mirror worlds, but they have great value. There is no point me building an esoteric hospital experience when we need to have real examples to train on. However, there is a point to having a stylised galleon floating in space as a theme for the group counselling experience we created.
The test from the naysayers of "it looks like a game" really seems to be dying off now. It will instead be "it doesn't look like the right sort of game". This is progress, and I still get a rush when I see people understand the potential we have to connect and communicate in business, art, education or just good old entertainment. We could all sit and scoff and say yeah, we've done that already, yes we know that etc. I am less worried about that; there might be a little voice in me saying I told you so, or wow, how wrong were you to try and put a stop to this, you muppets, but it is a very quiet voice. I want everyone to feel the rush of potential that I felt when it clicked for me, whatever their attitude was before. So bring it on 🙂

It’s ok to revisit ideas with new technology – VR, 3D, VW

I think that along with the "that's always the way we have done things" line that gets thrown about whenever anything new challenges the status quo, the line "but we tried that before and it did not work" is equally dangerous.
I do partially agree that if you keep doing the same thing and get the same outcome you should try something else. The adjustment to something else may be to revisit and gently tweak the approach. If you are throwing dice and trying to get a double six and failing, you just have to throw them in a different way until you get the right result to win the game. (Of course chaos theory and dependence on initial conditions in a feedback loop means you are already throwing differently with every throw, but that's another story.)
At the weekend we visited the Milestones Museum in Basingstoke. It is just around the corner but we had not been. It is a giant hangar with a huge recreation of 1900's and 1930's streets, and then some 50's, 60's, 70's and 80's artefacts too.
One thing that struck me in the camera shop exhibit was this.

It is two lenses looking at two similar photos, which gives a great stereoscopic effect. Now 3d photography, and the concept of a picture for each eye, has been around for a very, very long time. This is a plastic construction probably from the 60's. As kids I know lots of us had the iconic View-Master to view 3d content. I distinctly remember the Superman 12-page comic that I had. I can still picture the images in my head.

"View-Master Model G" by ThePassenger – Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

These devices let us see worlds, to engage with ideas albeit with an extra dollop of imagination.
These toys and tricks went out of favour and we headed into the digital age. We had a bit of a revisit of 3D with TVs and movies, but they miss one crucial element. The old still 3d of the View-Master was like a hint of reality, a story, a reading of a book. Films instead were a large amount of information, story and engagement. They were locked into the screen you were focussed on. It is an art form based on a framed window.
Video games started in, and continue to occupy, that framed space, but in amongst the platformers and the puzzle games we have found 3d virtual worlds. Spaces where we decide how to navigate, from gun-toting first-person shooters to free-roaming driving simulators. Each of these tends to have the activity, shooting, driving etc., as one aspect to control and the camera or view as another. Some of these worlds thrive on users coming together and building or working together. They connect us as humans, proxied by a digital environment over great distances. It is why Microsoft are looking to buy Mojang, the creators of Minecraft, for $2 billion; the virtual world is of value, the interaction is of value.
Now, we did have a virtual reality revolution back in the 80's, and one before that: headsets feeding our eyes two distinct images of environments, ones that we could control by turning our head and looking. What we did not have then is the lightweight high-end screens (as we see in smartphones), heavily commercialised location and orientation technology (again as we see in smartphones) and a population who understand and want to navigate in 3d space and metaverses (most kids will have played Minecraft, or seen Minecraft and wanted to play, or similar). So it makes a great deal of sense for us to now have the rise of the headset and of Virtual Reality to blend into these other advances of technology. The Oculus Rift, Morpheus et al.
Yes they get in the way of interacting with the physical world, but they give huge advantages in the virtual world adding to the sense of presence.
So if people say look, we did 3d, or look, we did VR, and it didn't work last time, then they are missing the fact that we are not just rolling the dice differently. We have a new set of dice, this time with a design conceived and created in a virtual world and printed to perfection. We may not roll a double six straight away, but when we do we win a much bigger game.
We may look on this period in time as a clunky use of tech, but we may also look on it with a curious fondness and a charm that leads us on to better things.
This example being the laughing sailor: you put your money in and it laughs. Puppetry and animatronics at a very basic level, but something endearing about it?

Lost archive returns – Opensim

A few years back (April 2011!), amongst the many other things I got to bring to kids' TV, I did a piece on virtual worlds and in particular Opensim. None of the show's clips are on the web any more, which is a pity as they are still useful resources in explaining things. I found a rare lost archive of my Opensim piece with Sy Thomas. So I thought I would pop it onto YouTube (I hope I am not in breach of any copyright!) as a flavour of the sort of thing we put into 39 shows of Saturday morning kids' TV on ITV.
Opensim is still very much alive and well and available, virtual worlds like Minecraft do very similar things. I would have done a follow up on Minecraft anyway. Equally now we have the rise of the Virtual Reality headset and immersive 3d environments are having their 3rd or 4th renaissance. Of course in this I point out that 3d creative environments are also distribution platforms for 3d printing and manufacturing.

Archie Productions (who made Cool Stuff Collective) have a fantastic new film about the Lancaster Bomber, Britain's Flying Past, airing this weekend at 21:30 on BBC2, so check that out too 🙂

An interesting game tech workshop in Wales

Last week I took a day out from some rather intense Unity3d development to head off to Bangor in North Wales. My fellow BCS Animation and Games Dev colleague Dr Robert Gittins invited me to keynote at a New Computer Technologies Wales event on Animation and Games 🙂
It is becoming an annual trip to similar events and it was good to catch up with David Burden of Daden Ltd again as we always both seem to be there.
As I figured that many of the people there were going to be into lots of games tech already, I did not do my usual type of presentation, well not all the way through anyway. I decided to help people understand the difference between development in a hosted virtual world like Second Life and developing from scratch with Unity3d. This made sense as we had Unity3d on the agenda, and there were also projects from Wales that were SL related, so I thought it a good overall intro.
I have written about the difference before back here in 2010 but I thought I could add a bit extra in explaining it in person and drawing on the current project(s) without sharing too much of things that are customer confidential.

Why SL development is not Unity3d development from Ian Hughes

I did of course start with a bit about Cool Stuff Collective and how we got Unity3d on kids' TV back in the Halloween 2010 edition. This was the show that moved us from CITV to ITV prime Saturday morning.
I added a big slide of things to consider in development that many non-game developers and IT architects will recognise. Game tech development differs from a standard application in content, but the infrastructure is very similar. The complication is in the "do something here" boxes of game play and the specifics of real time network interaction between clients, which is different to many client-server type applications (like the web).
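To make that client-server contrast concrete, here is my own minimal sketch (not from the slide, and the function names are just illustrative): a web-style handler does work only when a request arrives, whereas an authoritative game server runs on a fixed tick, applying whatever inputs have arrived and advancing the shared world whether or not anyone asked.

```python
# A web-style handler: state changes only when a client asks.
def web_style(request):
    return {"answer": request["question"].upper()}

# One authoritative game server tick: apply all inputs received since
# the last tick, advance the simulation, and return a snapshot that
# would be broadcast to every connected client.
def game_tick(state, pending_inputs):
    for player, move in pending_inputs:
        state[player] = state.get(player, 0) + move
    state["tick"] = state.get("tick", 0) + 1   # time always advances
    return dict(state)                          # snapshot to broadcast

state = {}
snapshot1 = game_tick(state, [("p1", 3)])  # p1 pressed "forward"
snapshot2 = game_tick(state, [])           # no input, world still moves on
print(snapshot2["tick"])                   # the tick counter keeps climbing
```

The point of the sketch is the empty-input tick: in a request/response system nothing happens between requests, while a game server must keep simulating and broadcasting regardless, which is where much of the extra complexity lives.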

After that I flipped back from tech to things like Forza 5 and in-game creation of content, Kinect and Choi Kwang Do, Project Spark and of course the Oculus Rift. I was glad I popped that in as it became a theme throughout the pitches, and most people mentioned it in some way, shape or form 🙂

It was great to see all the other presentations too. They covered a lot of diverse ground.

Panagiotis Ritsos from Bangor University gave some more updates on the challenges of teaching and rehearsing language interpretation in virtual environments with EVIVA/IVY, the Second Life projects and now the investigations into Unity3d.

Llyr ap Cenydd from Bangor University shared his research on procedural animation and definitely won the prize for the best visuals, as he showed his original procedural spider and then his amazing Oculus Rift deep sea experience with procedurally generated animations of dolphins.
Just to help in case this seems like gobbledegook: very often animations have been "recorded", either by someone or something being filmed in a special way that captures their movements and makes them available digitally as a whole. Procedural generation instead uses sense-and-respond to the environment and the construction of the thing being animated. Nothing is recorded; things happen in real time because they have to. An object can be given a push or an impulse to do something, and the rest is discovered by the collection of bits that make up the animated object. It is very cool stuff!
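Here is my own tiny sketch of that idea (nothing to do with Llyr's actual code, and the constants are made up): instead of playing back stored keyframes, each frame's pose is computed live from simple physical rules. A limb tip gets one impulse, a spring pulls it back towards rest, and the resulting wobble is never stored anywhere; it just happens, frame by frame.

```python
DT = 1.0 / 60.0    # fixed 60 fps frame time
STIFFNESS = 40.0   # spring constant pulling the tip back home
DAMPING = 4.0      # velocity damping so the motion settles

def step(pos, vel, rest=0.0):
    """Advance the simulated limb tip by one frame (semi-implicit Euler)."""
    accel = -STIFFNESS * (pos - rest) - DAMPING * vel
    vel += accel * DT    # update velocity first for stability
    pos += vel * DT
    return pos, vel

pos, vel = 0.0, 5.0      # a single impulse: 5 units/sec of velocity
for frame in range(120): # two seconds of animation, computed live
    pos, vel = step(pos, vel)
# by now the damped spring has flung the tip out and settled it back
# close to the rest position, without any recorded keyframes
```

A recorded animation of the same wobble would play identically every time; with this approach a different impulse, or a collision mid-flight, produces a different but still plausible motion for free, which is exactly why it suits creatures like that procedural spider.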

Just before the lunch break we had Joe Robins from Unity3d, the community evangelist and long term member of the Unity team, show us some of the new things in Unity 5 and have a general chat about Unity. He also did a Q&A session later that afternoon. It was very useful, as there is always more to learn or figure out.
We all did a bit of a panel, with quite a lot of talk about tech education for kids and how to just let them get on with it alongside the teachers, rather than waiting for teachers to become experienced programmers.
After lunch it was Pikachu time, or Pecha Kucha, whatever it is called 🙂 (http://www.pechakucha.org): 20 slides, each shown for 20 seconds, in a rapid-fire format. It is really good; it covers lots of ground and raises lots of questions.

David Burden of Daden Ltd went first with VR: the Second Coming of Virtual Worlds, exploring the sudden rise of VR and where it fits in the social and tech adoption curves. A big subject, and of course VR is getting a lot of press, as virtual worlds did. It is all the same underlying idea, but with different affordances for how to interact. They co-exist.

Andy Fawkes of Bohemia Interactive talked about the Virtual Battlespace – From Computer Game to Simulation. His company has the Arma engine that was originally used for Operation Flashpoint, and now has a spin-off with the cult classic Day Z. He talked about the sorts of simulations in the military space that are already heavily used, and how that is only going to increase. An interesting question was raised about the impact of increasingly real simulations; his opinion was that, no matter what we do currently, we all still know the difference, and that the real effects of war are drastically different. The training is about the procedures to get you through that effectively. There has been concern that drone pilots, who are in effect doing real things via a simulation, are too detached from the impact they have. Head to the office, fly a drone, go home to dinner. A serious but interesting point.

Gaz Thomas of The Game HomePage then gave a sparky talk on How to entertain 100 million people from your home office. Gaz is a budding new game developer who has made lots of quick-fire games. Not trained as a programmer, he wanted to do something on the web, so he set up a website, but then started building games as a way to bring people to his site. This led to some very popular games, but he found he was cloned very quickly and now tries to get the mobile and web versions released at the same time. It was very inspirational and great to see such enthusiasm and get up and go.

Ralph Ferneyhough of newly formed Quantum Soup Studios talked about The New AAA of Development – Agile, Artistic, Autonomous. This was a talk about how being small and willing to try newer things is much more possible, and needed, than the constant churn in the games industry of the sequel to the sequel of the sequel. The sums of money involved and the sizes of projects lead to stagnation. It was great to hear from someone who has been in the industry for a while branching out from corporate life. A fellow escapee, though from a different industry vertical.

Chris Payne of Games Dev North Wales gave the final talk on Hollywood vs VR: The Challenge Ahead. Chris works in the games industry and for several years has been a virtual camera expert. If you have tried to make cameras work in games, or played one where it was not quite right, you will appreciate this is a very intricate skill. He also makes films and pop videos. It was interesting to hear about the challenges that attempting to make 360 VR films is going to pose for what is a framed 2d medium. Chris showed a picture of a multi-camera rig, a sphere with lenses poking out all around it, rather like the Star Wars training drone on the Millennium Falcon that Luke tries his lightsaber with. This new camera shoots in all directions. Chris explained, though, that it was not possible to build one that was stereoscopic; the type of parallax and offsets that are needed can only really be done post filming. So a lot has to be done to make this giant 360 film something that can be interacted with in a headset like the Rift.

However, that is just the start of the problems. As he pointed out, the language of cinema, the tricks of the trade, just don't work when you can look anywhere and see anything. Sets can't have crew behind the camera, as there is no behind the camera. Storytellers have to consider whether you are in the scene and hence acknowledged, or a floating observer; focus pulls to gain attention don't work. Instead, game techniques to attract you to the key story elements are needed. Chris proposed that as rendering gets better it is more likely that VR movies will be all realtime CGI, in order to get around the physical problems of filming. It is a fascinating subject!

So it was well worth the 4am start for the 600 mile round trip, getting back by 10pm 🙂