unity3d


EyeBlend better than an Apple Vision Pro – still

In 2016 I wrote Cont3xt, the follow-up to Reconfigure, and part of that was an upgrade our lead character gets. She has access to an API that can manipulate the physical world like the virtual; in the first book she builds that up into a Unity application for her smartphone. In the second she gets an EyeBlend headset, state of the art (and still just a little ahead of Apple and co's attempts). So she ports the Unity app and upgrades the messaging system for this high bandwidth device to enhance the experience, rather like all the developers are doing right at the moment. Life imitates art 🙂 Read Reconfigure first, but this might be a topical time to check out Cont3xt whilst you wait for the Apple demo in a local store next year. (Just £0.99 each for these adventures 🙂 )

“An EyeBlend? They are not on general release yet? How did you get that?” Her eyes lit up wildly as Roisin took in the news.

“Oh you know, I have a few friends who hack for a living.” Alex replied with an air of comedy superiority.

“You sure do! The EyeBlend is an awesome piece of kit. It does full blended reality applications. It uses field effect projection with live object tracking!” She said enthusiastically. “They call it a hologram, though it really isn’t as such. You can forgive that though as the effect is the same, apparently.” Roisin was on her tech turf again.

Amazing phone scanning and AR animation

In a break from the all-day video calls and events I quickly downloaded Qlone to my iPhone. I also printed out the black and white scanning reference mat. This technique of a reference mat and moving the camera has been around for decades with varying degrees of success, so I was not sure how it would fare.

Superplastic

In about a minute of waving the phone around to fill the visual indicators on a virtual dome over this character I had a scan of him: a full colour 3D model in the app. I was so impressed with the initial quality that I paid for the upgrade, which did this remarkable thing.

Bear in mind the figure is just a bit of plastic; I did not highlight any joints or add any context to the model, but magically it was just able to walk around and animate itself instantly.

It can export to a multitude of 3D formats, so I think I will have to get Unity back up and running again. Very impressed, well done Qlone.

High end realtime CGI from Unity – compared to 1984 tech

Unity have released a short film created with the Unity3D game engine. It shows some serious design talent at work, but is also remarkable because it is rendered in real time. Anyone who remembers the early 80s, when we used to try to ray trace on a home computer and had to wait 24 hours for one frame of shaded imagery to be built up a line at a time, will appreciate the difference. Then in 1984 the Amiga had the animated Juggler demo flying around.

Here is the new Unity trailer for a short film they have created to show off the lighting, textures, animation, camera effects etc. It is also quite a powerful piece that cuts through all that to make you feel for the character.

Very impressive stuff I am sure you agree? That’s “just a game development tool” for you 🙂

Augmented Reality about Augmented Reality

I thought it would be interesting to revisit some of the Augmented Reality tools that have Unity3d packages for them. The first I headed for was Vuforia, now owned by PTC. I am using the free version. It all worked straight from the install, though there are a lot of features to tinker with.
I wanted to use an image marker of my own, so naturally I used the book cover. There is a lot of AR and VR in the story so it all makes sense.
For the 3D models I used the current trial of Fuse/Mixamo, now part of Adobe. It lets you build a model and export it in a nice friendly FBX format that Unity3d is happy with. You would be surprised how these various formats for models and animations hide so much complexity and weird tech problems though. Eventually I found a couple of animations that fitted the mood. This is not a full action sequence from the book, but it could be. I have some more vignettes in mind to experiment with. I am just upgrading my Windows machine so I can use the Kinect 2.0 for some of my own motion capture again. Finding the right animation, and then trying to edit it, is really hard work. The addition of motion, especially in humans, hits a different part of the brain I think. It makes lots of things look not quite right.
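For anyone wanting to try the same trick, the heart of it is surprisingly small. Here is a minimal sketch of the classic Vuforia-for-Unity pattern (not my exact project code): drop something like this on the image target and it toggles the character as the book cover is found or lost.

```csharp
using UnityEngine;
using Vuforia;

// A minimal sketch of the classic Vuforia-for-Unity pattern: react when an
// image target (the book cover here) enters or leaves the camera view.
public class CoverTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    void Start()
    {
        var trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Show or hide the child character (and its animation) with the marker.
        foreach (var r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}
```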
Augmented Reality on Augmented Reality
So here is the video of augmented reality on the cover of a physical copy of an ebook novel telling a story with lots of augmented reality in it, if that's not too meta. Roisin rather sassily taps her foot while the Commander lays down the law to her.

Reconfigure and Cont3xt are available right now to enjoy, and if at all possible, PLEASE write a review on Amazon, it makes a massive difference to the take up of the books.

Using the Real World – IoT, WebGL, MQTT, Marmite, Unity3d and CKD

All the technology and projects I have worked on in my career take what we currently have and create or push something further. Development projects of any kind will enhance or replace existing systems or create brand new ones. A regular systems update will tweak and fix older code and operations and make them fresher and newer. This happens even in legacy systems. Having both studied and lived some of the history of our current wave of technology, powered by the presence of the Internet, I find it interesting to reflect on certain technology trajectories. Not least to try and find a way to help grow these industries, and with a bit of luck actually get paid to do that. I find things finding out about other things fascinating. For Predlet 2.0's birthday party we took them all karting. There was a spare seat going so I joined in. The karts are all instrumented enough that the lap times are automatically grabbed as you pass the line. Just that one piece of data for each kart is then formulated and aggregated, not just with your group, but with the "ProSkill" ongoing tracking of your performance. The track knows who I am now I have registered. So if I turn up and race again it will show me more metrics and information about my performance, just from that single tag crossing the end-of-lap sensor. Yes, that's IoT in action, and we have had it for a while.
Great fun karting. Yay for being faster than 9 year olds :)
The area of Web services is an interesting one to look at. Back in 1997, whilst working on a very early website for a car manufacturer, we had a process to get at some data about skiing conditions. It required a regular CRON job to be scheduled to perform a secure FTP transfer to grab the current text file containing all the ski resorts' snowfall, so that we could parse it and push it into a form that could be viewed on the Web, i.e. with a nice set of graphics around it. That is nearly 20 years ago, and it was a pioneering application. It was not really a service or an API to talk to. It used the available automation we had, but it started as a manual process: pulling the file and running a few scripts to try to parse the comma-delimited data. The data, of course, came from both human observation and sensors. It was collated into one place for us to use. It was a real World set of measurements, pulled together and then adjusted and presented in a different form over the Internet via the Web. I think we can legitimately call that an Internet of Things (IoT) application?
We had a lot of fancy and interesting projects back then, well before their time, that are templates for what we do today. Hence I am heavily influenced by those, and having absorbed a few years ago what may seem new today, I like to look to the next steps.
Another element of technology that features in my work is the way we write code and deploy it, in particular the richer, dynamic game-style environments that I build for training people in. I use Unity3d mostly. It has stood the test of time and moved on with the underlying technology. In the development environment I can place 3D objects and interact with them, sometimes stand-alone, sometimes as networked objects. I tend to write in C# rather than Javascript, but it can cope with both. Any object can have code associated with it. It understands the virtual environment, where something is, what it is made of etc. A common piece of code I use picks one of the objects in the view and then, using the mouse, lets the virtual camera orbit that object. It is an interesting feeling still to be able to spin around something that initially looks flat and 2D. It is like a drone's-eye view, hovering or passing over objects.
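A minimal sketch of that orbit idea, with my own made-up speeds and input choices, looks something like this:

```csharp
using UnityEngine;

// A minimal orbit-camera sketch: drag the mouse to spin the camera around a
// chosen target object. Speeds, limits and the mouse button are my own choices.
public class OrbitCamera : MonoBehaviour
{
    public Transform target;        // the object to orbit, assigned in the Inspector
    public float distance = 10f;    // how far the camera sits from the target
    public float speed = 120f;      // degrees per second of drag

    private float yaw;
    private float pitch = 20f;

    void LateUpdate()
    {
        if (target == null) return;

        // Accumulate rotation while the left mouse button is held.
        if (Input.GetMouseButton(0))
        {
            yaw += Input.GetAxis("Mouse X") * speed * Time.deltaTime;
            pitch -= Input.GetAxis("Mouse Y") * speed * Time.deltaTime;
            pitch = Mathf.Clamp(pitch, -80f, 80f);
        }

        // Place the camera on a sphere around the target and look at it.
        Quaternion rotation = Quaternion.Euler(pitch, yaw, 0f);
        transform.position = target.position + rotation * new Vector3(0f, 0f, -distance);
        transform.LookAt(target.position);
    }
}
```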
Increasingly I have had to get the Unity applications to talk to the rest of the Web. They need to integrate with existing services, or with databases and APIs that I create: user logons, question data sets, training logs etc. In many ways it is the same as back in 1997. The pattern is the same, yet we have a lot more technology to help us as programmers. We have self-defining data sets now. XML used to be the one everyone raved about: basically web-like tags around data to start and stop a particular data element. It was always a little too heavy on payload though. When I interacted with the XML data for the tennis ball locations at Wimbledon, the XML was too big for Second Life to cope with at the time. The data had to be mashed down a little, removing the long descriptions of each field. Now we have JSON, a much tighter description of data. It is all pretty much the same of course. An implied comma-delimited file, such as the ski resort weather, worked really well, if the export didn't corrupt it. The XML version could be tightly controlled and parsed in a more formal, language-style way; JSON sits between the two. In JSON the data is just "name": value, as opposed to XML's <name>value</name>. It is the sort of data format that I used to end up creating anyway, before we had the formality of this as a standard.
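To see the difference in weight, here is the same hypothetical ski-report record (field names invented for illustration) in all three forms:

```
CSV:   Verbier,35,-2

XML:   <report>
         <resort>Verbier</resort>
         <snow_cm>35</snow_cm>
         <temp_c>-2</temp_c>
       </report>

JSON:  { "resort": "Verbier", "snow_cm": 35, "temp_c": -2 }
```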
Unity3d copes well with JSON natively now. It used to need a few extra bits of code, but as I found out recently it is very easy to parse a web-based API in code, extract those pieces of information and adjust the contents of the 3D environment accordingly. By easy, I mean easy if you are a techie. I am sure I could help most people get to the point of understanding how to do this. I appreciate too that having done this sort of thing for years there is a different definition of easy.
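As a rough sketch of how that looks (the endpoint and field names here are made up; JsonUtility is the built-in parser from Unity 5.3 onwards and WWW is the old simple fetch route):

```csharp
using System.Collections;
using UnityEngine;

// A minimal sketch, assuming a hypothetical endpoint and field names, of pulling
// JSON from a web API and adjusting the 3D environment from it.
public class SnowReportFetcher : MonoBehaviour
{
    [System.Serializable]
    public class Report { public string resort; public int snow_cm; }

    IEnumerator Start()
    {
        WWW www = new WWW("https://example.com/api/report"); // hypothetical API
        yield return www;
        if (!string.IsNullOrEmpty(www.error)) yield break;

        Report report = JsonUtility.FromJson<Report>(www.text);

        // Drive something in the scene from the data, e.g. scale a "snow"
        // object by the reported depth.
        transform.localScale = new Vector3(1f, report.snow_cm / 100f, 1f);
    }
}
```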
It is this grounding in the real World, pulling in data and manipulating it from the Internet and serving it to the Web, that seems to be a constant pattern. It is the pattern of IoT and of Big Data.
As part of the ongoing promotion of the science fiction books I have written, I created a version of the view Roisin has of the World in the first novel, Reconfigure. In that she discovers an API that can transcribe and describe the World around her.
This video shows a simulation of the FMM v1.0 (Roisin’s application) working as it would for her. A live WebGL version that just lets you move the camera around to get a feel for it is here.

WebGL is a new target that Unity3d can publish to. Unity used to be really good because it had a web plugin that let us deploy rich 3D applications to any web browser, not just builds for PC, Mac and tablets. Every application I have done over the past 7 years has generally had the web plugin version at its core to make life easier for the users. Plugins are dying and no longer supported on many browsers. Instead the browser has functions to draw things, move things about on screen etc. So Unity3d now generates the same thing as the plugin, which was common code, but creates a mini version for each application that is published. It is still very early days for WebGL, but it is interesting to be using it for this purpose as a test, and for some other API interactions with sensors across the Web.
In the story, the interaction Roisin starts with is a basic command line (but over Twitter DM), almost like the skiing FTP of 1997. She interrogates the API and figures out the syntax, which she then builds a user interface for, using Unity3d of course. The API only provides names and positions of objects, hence the cube view of the World. Roisin is able to move virtual objects and the API, using some Quantum theory, is then able to move the real World objects. In the follow-up this basic interface gets massively enhanced, with more IoT-style ways of interacting with the data, such as MQTT for messaging instead of the Twitter DMs of the first book. All real World stuff, except the moving things around. All evolved through long experience in the industry, to explain it in relatively simple terms and then let the adventure fly.
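For flavour, here is a minimal sketch (not code from the book) of what an MQTT version of that messaging loop might look like in C#, using the open-source M2Mqtt library; the broker, topic names and command syntax are all invented:

```csharp
using System;
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

// A sketch of a publish/subscribe messaging loop over MQTT with M2Mqtt.
class FmmMessaging
{
    static void Main()
    {
        var client = new MqttClient("test.mosquitto.org"); // a public test broker

        // Print any world-state updates the API publishes.
        client.MqttMsgPublishReceived += (sender, e) =>
            Console.WriteLine(e.Topic + ": " + Encoding.UTF8.GetString(e.Message));

        client.Connect(Guid.NewGuid().ToString());

        // Subscribe to object updates and push a move command, FMM style.
        client.Subscribe(new[] { "fmm/objects/#" },
                         new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });
        client.Publish("fmm/commands",
                       Encoding.UTF8.GetBytes("move box3 1.0 0.0 2.0"));

        Console.ReadLine(); // keep listening until Enter is pressed
    }
}
```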
I hope you can see the lineage of the technology in the books. I think the story and the twists and turns are the key though. The base tech makes it real enough to start to accept the storyline on top. When I wrote the tech parts and built the storyboard, they were the easy bits. How to weave in some intrigue, danger and peril was something else. From what I have been told, and what I feel, this has worked. I would love to know what more people think about it though. It may work as a teaching aid for how the Internet works, what IoT is etc. for any age group, from schools to the boardroom? The history and the feelings of awe and anger at technology are something we all feel at some point in some element of our lives too.
Whilst I am on the real World though: one of the biggest constants in Roisin's life is the love it or hate it taste of Marmite. It has become, through the course of the stories, almost a muse-like character. When writing you have to be careful with real life brands. I believe I have used the ones in these books as proper grounding in the real World. I try to be positive about everyone else's products, brands and efforts.
In Cont3xt I also added some martial arts, from my own personal experience again, but adjusted a little here and there. The initial use of it in Cont3xt is not what you might think when you hear martial art. I am a practitioner of Choi Kwang Do, though I do not specifically call the arts used in the book by that name, as there are times they are used aggressively, not purely for defence. The element of self improvement is in there, but with a twist.
Without the background in technology over the years, seeing it evolve, and without my own gradual personal journey in Choi Kwang Do, I would not have had the base material to draw upon to evolve the story on top of.
I hope you get a chance to read them, it's just a quick download. Please let me know what you think, if you have not already. Thank you 🙂

It’s alive – Cont3xt – The follow up to Reconfigure

On Friday I pressed the publish button on Cont3xt. It whisked its way onto Amazon and the KDP Select processes. The text box suggested it might be 72 hours before it was live; in reality it was only about 2 hours, followed by a few hours of links getting sorted across the stores. I was really planning to release it today, but, well, it's there and live.
I did of course have to mention it in a few tweets, but I am not intending to go overboard. I recorded a new promo video, and played around with some of the card features YouTube now has to provide links. I am also pushing out a director's cut version. The version here is a 3 minute promo; the other version is 10 minutes explaining the current state of VR and AR activity in the real industry.


As you can see I am continuing the 0.99 price point. I hope that encourages a few more readers to take a punt and then get immersed in Roisin’s world.
Cont3xt is full of VR, AR and her new Blended Reality kit. It has some new locations and even some martial arts fight scenes. Peril, excitement and adventure, with a load of real World and future technology. What's not to like?
I hope you give it a download and get to enjoy it as much as everyone else who has read it seems to.
This next stage in the journey has been incredibly interesting and I will share some of that later. For now I just cast the book out there to see whether people will be interested now there is the start of a series 🙂

Time for WebGL? Quicker virtual worlds in Unity3d

This weekend I helped a friend get a network-connected Unity3D environment up and running. I have, of course, done this a lot. In fact it has generally been the focus of most of my dev projects over the past 7 years. I guess that means I am past the 'expert', know-it-all part and into mastery, when everything reminds you of how much you have yet to learn in the vastness of a subject.
This time though I was faced with the deprecation of the Unity3D web plugin. I had seen the gradual phasing out of the incredibly useful and powerful plugin, due to the web browser developers taking away support bit by bit. I have used Unity3D for many years because of its flexibility in allowing me, as a developer and creator, the potential to target lots of runtime platforms from a single base project. The WebGL option has sat there for a while and I have experimented with the basics of what Unity3D creates. The problem is, when all the core routines are in the plugin, you only provide your layer of code to make it do something; the plugin is a collection of APIs and routines. When you switch to WebGL, Unity3D, or any platform, has to provide a lot of the underlying function to support your application as part of the package. Also the plugin would typically have some permissions given to it by the user that a WebGL page, with a stack of javascript tucked away in a compressed file, does not have.
Consequently I was not really expecting, without a lot of hacking, my basic set of tools to work and allow me to create a standard multi-user environment that still runs without the user having to do very much other than access a web page, i.e. no install.
Surprisingly it did work though. I use Photon, and its default Worker demo, as the basis to test things. When I pushed to WebGL instead of the web player it just worked. There may be some backing off on which protocol it uses, as it negotiates the best way to talk to the other clients, but it worked without any hassle.
PUN demo altered and working
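For anyone trying the same route, the connection code itself is tiny. Here is a minimal sketch against the classic PUN 1.x API (names moved around in later versions); the prefab name is my own invention:

```csharp
using UnityEngine;

// A minimal PUN 1.x style connect-and-join sketch. PUN calls the On... methods
// below on any MonoBehaviour in the scene.
public class QuickConnect : MonoBehaviour
{
    void Start()
    {
        // Connect using the PhotonServerSettings asset; "v1.0" keeps
        // incompatible client builds apart.
        PhotonNetwork.ConnectUsingSettings("v1.0");
    }

    // One of these fires on connect, depending on the auto-join-lobby setting.
    void OnConnectedToMaster() { PhotonNetwork.JoinRandomRoom(); }
    void OnJoinedLobby()       { PhotonNetwork.JoinRandomRoom(); }

    // No rooms open yet, so create one and become master client.
    void OnPhotonRandomJoinFailed() { PhotonNetwork.CreateRoom(null); }

    void OnJoinedRoom()
    {
        // Spawn a networked avatar; the prefab (a made-up name here) must
        // sit in a Resources folder so every client can load it.
        PhotonNetwork.Instantiate("PlayerPrefab", Vector3.zero, Quaternion.identity, 0);
    }
}
```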
Interestingly it also worked, well nearly, on the iPhone. The web plugin was never designed to work on smartphones or tablets; instead you build specific apps for those devices. However, the basic demo actually started and connected in Safari on the iPhone. The problem was that the demo uses the old GUI code and not the fancy new Unity 5.0 user interface, so it has no idea about responding to touch inputs unless you put extra code into it. I may well give that a go to see where this deployment option is going. If it does actually work (despite not being supported) it further eradicates the need to package and build apps. People get the up-to-date version each time it is served.
Despite working straight away locally, I had a few more problems when I placed the WebGL build on my 1and1 server. The packaged-up versions of all the code, mine and Unity's, are stored in several files, a .jsgz, .memgz and .datagz, which the Unity loader.js wants to pull in. It decompresses them in memory at runtime. However, the files were listed as .js, .mem and .data in the index.html. I think the loader looks for the uncompressed versions first and then looks for the zipped versions. Somewhere in that I was getting an error, but when I changed the generated index.html to look for the compressed files directly it actually worked.
I suffered from some caching problems with the .html file though, which made it seem it was not working. Having changed the internal URLs and reloaded the page it still failed. Looking at the source I saw it had not changed. A shift-reload used to force a non-cached version of the page, but that did not work, and neither did closing Safari. Renaming index to index2 did the trick. Some of this may be edge-of-network caching at 1and1, or the browser just being too much in control. It is an old problem many developers have had over the years as the web has matured.
The good news, then, is that the Photon Unity Networking free samples, with the Worker scene, seem to connect across web browsers in a peer-to-peer way, just as the old ones used to with the plugin, but now in WebGL. I have not tried Unity's new networking but I assume it will work also. If the packets flow then there is some hope for voice too, though voice is a packet hog and the packages to do voice chat are few and far between.
It actually took longer to get a shared project up and synching than it did to get a custom little virtual world working.
There will be a lot of details that don't work: how graphics are rendered, lighting etc. The entire, mature Unity3D plugin has to be rebuilt in bits of javascript, which sounds crazy, for each application. However, there are some clever optimisation routines that only include the bits of the code that are needed, it seems. So everything old is new again and I can get back to making online environments quickly for customers and projects.

Making Movies – Flying around in Unity3d

Yes, ok, my latest obsession of writing #Reconfigure might make a great TV series or movie, but that's not what this is about. Instead it is about another set of skills I seem to be putting into practice.
Before that I spent a day editing up the school end-of-year video. I didn't shoot it, but I did use all the various messing around and comedy moments from the teachers and staff to put a montage together. It had a story, some pace, a step change or two. It was great fun. I have now been asked to edit a few more things together. I like editing moving images, it is fun. Though I am not charging for this; it is just to help people out.
At the same time I am in the middle of a little project to try and jazz up a regular presentation in the corporate space. I demoed a little virtual fly-through of a building, past a couple of characters and onto an image, using Unity3d. It turned into a slightly more complex epic, but nonetheless I am using lots of varied skills to make it work. The pitch will be pre-canned, but I have built the Unity environment so that it runs on rails, but live. I have C# code in there to help do certain things like play video textures, animation controllers and even NavMesh agents to allow a character to walk around to a particular room, plus lots of flying cameras. I had used the first few things a lot; it is stock Unity. However I had not really had a chance to use Unity animations. All my animations had been either remade FBX files or ones that I created in Cheetah3D. Once you imported an animation like that it was then sort of locked in place.
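The NavMesh part is pleasingly small. A minimal sketch, with a waypoint and trigger of my own invention, looks like this:

```csharp
using UnityEngine;
using UnityEngine.AI; // in older Unity versions NavMeshAgent lives directly in UnityEngine

// A minimal sketch of the walk-to-a-room trick: send a character's NavMeshAgent
// to a named waypoint when the scripted sequence reaches that step.
public class WalkToRoom : MonoBehaviour
{
    public Transform roomWaypoint;   // an empty GameObject placed in the target room
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    public void Go()
    {
        // The agent plans a path across the baked NavMesh and walks it,
        // steering around obstacles on the way.
        agent.SetDestination(roomWaypoint.position);
    }
}
```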
However, Unity3d has its own animation dope and curve sheets, and it's really handy.
The Animation tab brings up a different timeline. It is different to the Animation Controller screen that lets you string animations together and create transitions, e.g. a jump, or breaking into a run from a walk.
The Animation tab lets you select an object in the scene and creates a .anim for you. It then by default adds that anim to an animation controller and adds that to the object.
Unity3d Animation Timeline
The record mode then allows you to expand any or all of the properties of the object, usually transform position and rotation, but it can be colour, visibility, basically anything with a parameter. If you move the timeline and then move the object, a camera for instance, it creates the key frames for the changed parameters. It removes any of the awkward importing of paths and animation ideas from another tool. It is not great for figures and people; they are still best imported as skinned meshes with skeletons, letting the import deal with all the complexity so you can add any stock animations to them. However, for a cut scene, or a zoom past to see a screen on a wall, it works really well.
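Under the hood, record mode is just writing keyframed curves onto the object's properties. A rough sketch of building a similar fly-past in code instead (using the legacy Animation component, since runtime-built clips play through that):

```csharp
using UnityEngine;

// A sketch of what the Animation tab's record mode produces: keyframed curves
// on an object's properties, here created in code at runtime.
public class FlyPast : MonoBehaviour
{
    void Start()
    {
        var clip = new AnimationClip();
        clip.legacy = true; // runtime-built clips play through the old Animation component

        // Key the object's x position from 0 to 10 over four seconds.
        clip.SetCurve("", typeof(Transform), "localPosition.x",
                      AnimationCurve.EaseInOut(0f, 0f, 4f, 10f));

        var anim = gameObject.AddComponent<Animation>();
        anim.AddClip(clip, "flypast");
        anim.Play("flypast");
    }
}
```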
I have written a lot of complicated things in Unity3d, but never bothered using this part. It is great to find it working, and doing exactly what I needed it to do for my cinematic cut scenes.
You can have things zooming and spinning around a known path in no time. You have to break it down a little, otherwise it resolves the easiest path through a wall or something, but it seems very robust to edit afterwards.
The only pity is that it's not possible to lock a camera preview window open in stock Unity3d when selecting another object. Having got a camera pan just right, it's tricky if you want to move an object in the camera view: with the camera selected you see its preview, with the object selected you see the object. I have not got around that yet, so it ends up as trial and error.
No matter, it works, and it's awesomely useful.

Emotiv Insight – Be the Borg

Many years ago a few of us had early access, or nearly early access, to some interesting brain wave reading equipment from Emotiv. This was back in 2005. I had a unit arrive at the office, but then it got locked up in a cupboard until the company lawyers said it was OK to use. It was still there when I left, though it did make its way out of the cupboard by 2010. It even helped some old colleagues get a bit of TV action.
It was not such a problem with my US colleagues who at least got to try using it.
I got to experience and show similar brain-controlled toys on TV with the NeuroSky-based Mindflex game. This allows the player to try to relax or concentrate to control air fans that blow a small foam ball around a course.
So in 2013, when Emotiv announced a Kickstarter for the new headsets, in this case the Insight, I signed right up.
We (in one of our many virtual world jam sessions) had talked about interesting uses of the device to detect user emotions in virtual worlds. One such use was if your avatar, and how you chose to dress or decorate yourself, caused discomfort or shock to another person (wearing a headset), their own display of you could be tailored to make them happy. It's a modern take on the emperor's new clothes, but one that can work. People see what they need to see. It could have worked at Wimbledon this year when Lewis Hamilton turned up to the royal box without a jacket and tie. In a virtual environment that "shock" would mean any viewers of his avatar would indeed see a jacket and tie.
It has taken 2 years but the Emotiv Insight headset has finally arrived. As with all early adopter kit though there are some hiccups. Michael Rowe, one of my fellow conspirators from back in the virtual world heyday, has blogged about his unit arriving.
Well, here is my shiny boxed-up version, complete with t-shirt and badge 🙂
Emotiv insight has arrived
The unit itself feels very slick and modern. Several contact sensors spread out over your scalp, and an important reference sensor on the bone behind the ear (avoid your glasses), need to be in place.
Though you do look a bit odd wearing it 🙂 But we have to suffer for the tech art.
Emotiv headset
I charged the unit up via the USB cable and waited for the light to go green. I unplugged the device and hit the on switch (near the green light). Here I then thought it was broken. It is something very simple, but I assumed, never assume, that the same light near the switch would be the power light. That was not the case. The "on" light is a small blue/white LED above the logo on the side. It was right under where I was holding the unit. Silly me.
Emotiv insight power light
I then set about figuring out what could be done, if it worked (once I saw the power light).
This got much trickier. With a lot of Kickstarters, each has its own way of communicating. When you have not seen something appear for 2 years you tend not to be reading the updates every day, following every link and nuance.
So the unit connects with Bluetooth, but not just any Bluetooth; it is the new Bluetooth Low Energy (BTLE). This is a low power upgrade to Bluetooth that many older devices do not support. I checked my Macbook Pro and it seemed to have the right stack. (How on earth did NASA manage to get the probe to Pluto over 9 years without the comms tech all breaking!)
I logged into the Emotiv store and saw a control panel application. I tried to download it, but having selected Mac it bounced me to my user profile saying I needed to be logged on. It is worrying when the simplest of web applications is not working for something as extremely complex as a brain reading device strapped to your head! It seems the Windows one would download, but that was no good.
I found some SDK code via another link, but a lot of that seemed to be emulation. It also turns out that the iOS 8 updates mashed up the BTLE versions of any apps they had, so it doesn't work on those now. Still, some Android apps seem to be operational.
I left it a few days, slightly disappointed but figured it would get sorted.
Then today I found a comment on joining the Google Plus group (another flashback to the past) and a helpdesk post saying go to https://cpanel.emotivinsight.com/BTLE/
As soon as I did that, it downloaded a plugin, and then I saw it pair over Bluetooth with the device with no fuss. A setup screen flickered into life and it showed sensors were firing on the unit.
The webpage looked like it should then link somewhere, but it turns out there is a menu option hidden in the top left hand corner, with a submenu "detections". That let me get to a few web based applications that at last responded. I have not had much luck training and using it yet, as often the apps would lock up, throw a warning message and the Bluetooth would drop. I did however get a trace; my brain is in there and working. I was starting to think it was not!
Emotiv Insight Brain trace
So I now have a working brain interface, I am Borg, it’s 2015. Time to dig up those ideas from 10 years ago!
I am very interested in what happens when I mix this with headsets like Oculus Rift. Though it is a pity that Emotiv seem to want to charge loyal backers $70 for an Emotiv Unity plugin, which apparently doesn't work in Unity 5.0 yet? I think they may need some software engineering and architecture help. Call me, I am available at the moment!
I am also interested, given how light the unit is, in tracing what happens in Choi Kwang Do training, and in aiming for a particular mindset trace, i.e. we always try to relax to generate more speed and power in things that instinct tells you to tense up for. Of course that will have to wait a few weeks for my hip to recover. The physical world is a dangerous place and I sprained my left hip tripping over the fence around the predpets. So it's been a good time to ponder interesting things to do whilst not sitting too long at this keyboard.
As ever, I am open for business. If you need to understand what is going on out here in the world of new technology and where it fits, that's what Feeding Edge Ltd does, taking a bite out of technology so you don't have to.

Hololens – Now this is cool progress!

Yesterday Microsoft had a big series of announcements at its Build 2015 conference. The most exciting demonstration, IMHO, was the one on stage with Hololens. In this video they have a camera rendering the cameraman's point of view of the digital content attached to real space, while the demo person is wearing their Hololens and operating from their own perspective.

Taking applications that are floating in your heads-up display and attaching them to real world walls, tables etc. is very clever. Interacting with those objects even more so. The part at the end of this clip with the video screen and resizing it… well, who needs a TV now then?
This sort of blended reality (yes, that term I keep using, as this is so much more than just augmentation of reality) creates a shared space that, in this case, the camera viewer is also allowed to see. However, as I mentioned in my last post, there is no reason for the view to be the same or shared. Both modes can work. We can both be watching the same film but my weather desk object might be showing a completely different place to yours. Or we can each have completely different self-contained shards of rooms, yet in the same place. Throw in physical creation of objects with 3D printing or force feedback haptics and we have one heck of an environment to explore.
Microsoft have published a very slick video of the hardware too.

They obviously have gone first on this tech but Google have Magic Leap waiting in the wings. There is a race on.
Whilst clearly many of us, no matter how much we have been around this sort of tech and around virtual worlds, are not going to just get sent a Hololens to build with, we can still explore some of the principles using old fashioned tech like a smartphone and MergeVR/Cardboard etc. and packages like Vuforia. Though Microsoft… if you want to send me one or sign up a tech evangelist for this stuff (though it will sell itself) I am available anytime!
We have had AR to experiment with for a while, but this wave is true 3D interaction with a markerless environment. Back in 2007 I wondered what it would be like to put an AR marker as a texture into a virtual world (in this case Second Life) and then point a camera, with an AR application running, at the screen that was rendering SL.
Avatar meets AR
The result was that the Ms Pacman in this slightly blurry picture is not visible to my epredator avatar in Second Life. That client only sees the marker. Yet as we pan the camera around or look at the marker, the second application, which is an AR magic lens, renders Ms PacMan.
Blended reality does not have to be solely one layer into the physical world. That is the easiest to demonstrate, but we have so many more choices and opportunities to explore.
I made a little video to show the basic Vuforia demos in Unity3d that are marker based. This stuff just works these days. It is less faffing about than it used to be. It is fantastic to see everything get so accessible and slick.

This is a tipping point. Virtual content, virtual environments and virtual worlds get even more engaging with the more direct natural inputs we can make to our body. Full VR is able to remove the world and drop you somewhere that you can't be or that doesn't exist. Blended Reality is able to bring things that don't exist to the space you inhabit. The interaction metaphors are going to be interesting too. Do you render a virtual waste bin to throw applications into when done, or do you map that to a small crackling virtual fireplace that sits in the actual fireplace in your living room?
I am going to stop listing things as this will go on for a while. Time to go and build some things to share later.