

Excited tech celebs and a countdown – Meta #AR

There is a countdown running and people are showing their excitement. This time it is not for an enclosed virtual reality headset but for a full computing platform Augmented Reality headset: Meta @metaglasses. I have talked and written a lot (including the two novels!) about augmented reality and what it means when it is done properly. This teaser video features many luminaries of tech, pop culture and business evangelising the product based on the demo they have seen. Scoble seems particularly gushing about it. He posted an hour's worth of that on Facebook after his intro demo, but as it's all embargoed he could only talk generally.
With Hololens, Magic Leap and now Meta (assuming it is in the same category, which at $600 – $3000 it will be), and of course my own fictional EyeBlend, there is a lot going on that may leapfrog the VR wave this time around.

This has been around and developing for a while. Initially it seemed to have to answer to Google Glass, which was really just a heads-up display, not full AR. The early videos show bug-eyed, aviator-inspired glasses, but the latest pictures are more visor-like, with a noticeable sensor bar.
will.i.am seems pretty interested in it too. As he says, it opens up the possibilities for the arts. So maybe he will help me make the blended reality movie or series for Reconfigure?
This sort of kit goes past what we would call gaming and entertainment, as it provides real-time feedback from the World and the Internet of Things, helping us see the stuff we can't normally see, but in situ. Again, with Meta it will depend on how they build the physical model of the World, or if they do, in order to implement the digital in-place views. Some earlier videos show real world objects, a flat surface such as a box, being used as a canvas and a tracking reference point for the digital content.
The countdown clock is ticking though, and the final hype is being ramped up. So it is exciting, however it turns out. Their countdown says 19 days left (best check the site as that is of course an out of date piece of text 🙂 )

Lucky 7 years – Feeding Edge birthday

Wow. It is seven years since I started Feeding Edge Ltd. That is quite a long while, isn't it? The past year has been a more difficult one, with less work in the pipeline for most of it. It has meant I have had to take stock and look to do other things whilst the World catches up. It does seem strange, given the dawn of the new wave of Virtual Reality and Augmented Reality, that I have not managed to find the right people to engage my expertise and background in both regular technology and virtual worlds. In part that was because I was focussing on one major contract; when that dried up suddenly there was nowhere to go. It is starting from zero again to build up and sell who I am and what I do.
My other startup work has always been ticking along under the covers, as we try and work the system to find the right person with the right vision to fund what we have in mind. It is a big and glorious project, but it all takes time. Lots of no, yes but, and even a few let's do it, followed by oh hang on, can't now, other stuff has come up.
On the summer holiday, whilst having a good long think about whether to give this all up and try and find a regular position back in corporate life, I was hit with a flash of inspiration for the science fiction concept. It was so obvious that I just had to give it a go. That is not to say I would not accept a well paid job with slightly more structure to help pay my way. However, the books flowed out of me. It was an incredibly exciting end to the year. Learning how to write and structure Reconfigure, how to package and build the ebook and the print version, how to release and then try to promote it. I have learned so much doing it that helps me personally, helps my business and will also help in any consulting work I do in the future. I realised too that the products of both Reconfigure and Cont3xt are like a CV for me. They represent a state of the virtual world, virtual reality, augmented reality and Internet of Things industry, combined with the coding and use of game tech that comes directly from my experiences, extrapolated for the purpose of storytelling.
Write what you know, and that appears to be the future, in this case the near future.
This year I have also been continuing my journey in Choi Kwang Do, this time with a black suit on as a head instructor. It has led me to give talks to schools on science and why it is important, with Choi Kwang Do as a backdrop to hook them in. I am constantly trying to evolve as a teacher and a student. Once again the reflective nature of the art was woven into the second book, Cont3xt. I did not brand any of the martial arts action in the book as Choi Kwang Do, as that might misrepresent the art and I don't want to do that, but it did influence the start of the story with its more reflective elements. Later on a degree of poetic licence kicked in, but the feeling of performing the moves is very real.
I have continued my pursuit of the unusual throughout the year. The books as a product provide, rather like the Feeding Edge logo has in the past, a vehicle to explore ideas.
I still really like my Forza 6 book branded Lambo, demonstrating the concept of digital in world product placement.

If you have read the books (and if not, why not? They are only 99p) you will know that Roisin likes Marmite. Why not? I like Marmite; again, write what you know. It became a vehicle and an ongoing thread in the stories, and even a bit of a calling card. It is a real world brand, so that can be tricky, but I think I use it in a positive way, as well as showing that not everyone is a fan. So it is just another real world hook to make the science fiction elements believable. I was really pleased when I saw that Marmite had a print-your-own-label customisation. It is print on demand Marmite, just as my books are print on demand. It uses the web and the internet to accept the order, and then there is physical delivery. I know it's a bit meta, but that's the same pattern Roisin uses, just with the physical movement of things being a little more quirky 🙂
http://www.cont3xtbook.co.uk meets #marmite
I have another two jars on the way. One for Reconfigure and one for Roisin herself.
I am sure she will change her own Twitter icon from the regular jar to one of these later, as @axelweight. Yes, she does have a Twitter account. She had to, otherwise she would not have been able to accidentally Tweet “ls -l” and get introduced to the World changing device @RayKonfigure, would she?
All this interweaving of tech and experience, in this case related to the books, is what I do and have always done. I hope my ideas are inspirational to some, and one day obvious to others. I will keep trying to do the right thing, be positive and share as much as possible.
I am available to talk through any opportunities you have, anytime. epredator at feedingedge.co.uk or @epredator
Finally, last but not least, I have to say a huge thank you to my wife Jan @elemming. She has the pressure of the corporate role, one that she enjoys, but still it is the pressure. She is the major breadwinner. You can imagine how many 99p books you have to sell to make any money to pay anything. She puts up with the downs whilst we wait for the ups. Those ups will re-emerge; this year has shown that to me. No matter how bleak it looks, something happens to offer hope. I have some new projects in the pipeline, mostly speculative, but with all these crazy ideas buzzing around something will pop one day.
As we say in Choi Kwang Do – Pil Seung! which means certain victory. Happy lucky 7th birthday Feeding Edge 🙂

It’s alive – Cont3xt – The follow up to Reconfigure

On Friday I pressed the publish button on Cont3xt. It whisked its way onto Amazon and the KDP Select processes. The text box suggested it might be 72 hours before it was live; in reality it was only about 2 hours, followed by a few hours of links getting sorted across the stores. I was really planning to release it today but, well, it's there and live.
I did of course have to mention it in a few tweets, but I am not intending to go overboard. I recorded a new promo video, and played around with some of the card features now on YouTube to provide links. I am also pushing a director's cut version. The version here is a 3 minute promo. The other version is 10 minutes, explaining the current state of VR and AR activity in the real industry.


As you can see I am continuing the 0.99 price point. I hope that encourages a few more readers to take a punt and then get immersed in Roisin’s world.
Cont3xt is full of VR, AR and her new Blended Reality kit. It has some new locations and even some martial arts fight scenes. Peril, excitement and adventure, with a load of real World and future technology. What's not to like?
I hope you give it a download and get to enjoy it as much as everyone else who has read it seems to.
This next stage in the journey has been incredibly interesting and I will share some of that later. For now I just cast the book out there to see whether people will be interested now there is the start of a series 🙂

Untethering Humans, goodbye screens

We are on the cusp of a huge change in how we as humans interact with one another, with the world and with the things we create for one another. A bold statement, but one that stands up, I believe, by following some historical developments in technology and social and work related change.
The change involves all the great terms, Augmented Reality, Virtual Reality, Blended Reality and the metaverse. It is a change that has a major feature, one of untethering, or unshackling us as human beings from a fixed place or a fixed view of the world.

Hovering
Here I am being untethered from the world in a freefall parachute experience, whilst also being on TV 🙂

All the great revolutions in human endeavour have involved either transporting us via our imagination to another place or concept (books, film, plays etc.) or transporting us physically to another place (the wheel, steam trains, flight). Even modern telecommunications fit into that bracket. The telephone or the video link transport us to a shared virtual place with other people.

Virtual worlds are, as I may have mentioned a few times before, ways to enhance the experience of humans interacting with other humans and with interesting concepts and ideas. That experience, up to now, has been a tethered one. We have evolved over the last few decades becoming reliant on rectangular screens. Windows on the world, showing us text, images, video and virtual environments. Those screens have evolved. We had large bulky cathode ray tubes, then LED, plasma, OLED and various flat wall projectors. The screens have always remained a frame, a fixed size tethering us to a more tunnel-vision view. The screen is a funnel through which elements are directed at us. We started the untethering process with wi-fi and mobile communications. Laptops, tablets and smartphones gave us the ability to take that funnel, a focused view of a world, with us.

More recent developments have led to the VR headsets. An attempt to provide an experience that completely immerses us by providing a single screen for each eye. It is a specific medium for many yet to be invented experiences. It does, though, tether us further. It removes the world. That is not to say the Oculus Rift, Morpheus and HTC Vive are not important steps, but they are half the story of untethering the human race. Forget the bulk and the weight; we are good at making things smaller and lighter, as we have seen with the mobile telephone. The pure injection of something into our eyes and ears via a blinkering system feels, and is, more tethering. It is good at the second affordance, transporting us to places with others, and it is where virtual worlds and the VR headsets naturally and obviously co-exist.

The real revolution comes from full blended reality and realities. That plural is important. We have had magic lens and magic mirror Augmented Reality for a while. Marker based and markerless ways to use one of these screens that we carry around, or have fixed in our living rooms, to show us digital representations of things placed in our environment. They are always fun. However, they are almost always just another screen within our screens. Being able to see, feel and hear things in our physical environment wherever we are in the world, and have them form part of that environment, truly untethers us.
Augmented Reality is not new of course. We, as in the tech community, have been tinkering with it for years. Even this mini example on my old Nokia N95 back in 2008 starts to hint at the direction of travel.

Hackday5 Augmented Reality step 2

The devices used to do this are obviously starting at a basic level, though we have the AR headsets of Microsoft Hololens and the Google-backed Magic Leap to start pushing the boundaries. They will not be the final result of this massive change. With an Internet of Things world, with the physical world instrumented and producing data, with large cloud servers offering compute power at the end of a wireless connection to analyse that data, and with the ability to visualize and interact with things in our physical environment, we have a lot to discover and to explore.

I mentioned realities, not just reality. There is no reason to only have augmentation into a physical world. After all, if you are immersed in a game environment or a virtual world you may actually choose, because you can, to shut out the world, to draw the digital curtains and explore. However, just as when you are engaged in any activity anywhere, things come to you in context. You need to be able to interact with other environments from whichever one you happen to be in.

Take an example, using Microsoft Hololens, Minecraft and Skype. In today's world you would have Minecraft on your laptop/console/phone, be digging around, building, sharing the space with others. A Skype call comes in from someone you want to talk to. You window away from Minecraft and focus on Skype. It is all very tethered. In a blended reality, as Hololens has shown, you can have Minecraft on the rug in front of you and Skype hanging on the wall next to the real clock. Things and data placed in the physical environment in a way that works for you and for them. However, you may want to be more totally immersed in Minecraft and go full VR. If something can make a small hole in the real world for the experience, then it can surely make an all encompassing hole, thus providing you with only Minecraft. Yet, if it can place Skype on your real wall, then it can also place it on your virtual walls and bring that along with you.
This is very much a combination that is going to happen. It is not binary to be either in VR or not, in AR or not. It is either, both or neither.

It is noticeable that Facebook, who bought heavily into Oculus Rift, purchased Surreal Vision last month, a company who specialize in using instrumentation and scanning kit to make sense of the physical world and place digital data in that world. Up until now Oculus Rift, which has really led the VR charge since its Kickstarter (yes, I backed that one!), has been focussed on the blinkered version of VR. This purchase shows the intent to go for a blended approach. Obviously this is needed, as otherwise Magic Leap and Hololens will quickly eat into the Rift's place in the world.
So three of the world's largest companies, Facebook, Google and Microsoft, have significant plays in blended reality and the “face race”, as it is sometimes called, to get headsets on us. Sony have Project Morpheus, which is generally just VR, yet Sony have had AR applications for many years with PS Move.
Here is Predlet 2.0 enjoying the AR EyePet experience back in 2010 (yes, five years ago!)

So here it is. We are getting increasingly more accurate ways to map the world into data, we have a world producing IoT data streams, we have ever more ubiquitous networks and we have devices that know where they are and what direction they are facing. We have high power backend servers that can make sense of our speech and of context. On top of that we have free roaming devices to feed and overlay information to us; yes, a little bit bulky and clunky, but that will change. We are almost completely untethered. Wherever we are we can experience whatever we need or want, and in the future we can blend that with more than one experience. We can introduce others to our point of view or keep our point of view private. We can fully immerse or just augment, or augment our full immersion. We can also make the virtual real with 3d printing!
That is truly exciting and amazing isn’t it?
It makes this sort of thing in Second Life seem an age away, but it is the underpinning for me and many others. Even in this old picture of the initial Wimbledon build in 2006 there is a cube hovering at the back. It is one of Jessica Qin’s virtual world virtual reality cubes.
Wimbledon 06 magic carpet
That cube and other things like it provided an inspiration for this sort of multiple level augmentation of reality. It was constrained by the tethered screen but was, and still is, remarkably influential.
Jessica Qin Holocube
Here my Second Life avatar, and so my view of the virtual world, is inside a 3d effect that provides a view, a curved 3d view, not just a flat picture. It is as if the avatar is wearing a virtual Oculus Rift, if that helps get the idea across.
This is from 2008, by way of a quick example that even then it could work.

So let's get blending then 🙂

Hololens – Now this is cool progress!

Yesterday Microsoft had a big series of announcements at its Build 2015 conference. The most exciting demonstration, IMHO, was on stage with Hololens. In this video they have a camera rendering the cameraman's point of view of the digital content attached to real space, while the demo person is wearing their Hololens and operating from their own perspective.

Taking applications that are floating in your heads up display, and attaching them to real world walls, tables etc., is very clever. Interacting with those objects even more so. The part at the end of this clip with the video screen and resizing it…. well, who needs a TV now then?
This sort of blended reality (yes, that term again, which I keep using as this is so much more than just augmentation of reality) creates a shared space that in this case the camera viewer is also allowed to see. However, as I mentioned in my last post, there is no reason for the view to be the same or shared. Both modes can work. We can both be watching the same film, but my weather desk object might be showing a completely different place to yours. Or we can each have completely different self contained shards of rooms, yet in the same place. Throw in physical creation of objects in 3d printing or force feedback haptics and we have one heck of an environment to explore.
Microsoft have published a very slick video too of the hardware

They obviously have gone first on this tech, but Google have Magic Leap waiting in the wings. There is a race on.
Whilst clearly many of us, no matter how much we have been around this sort of tech and around virtual worlds, are not going to just get sent a Hololens to build with, we can still explore some of the principles using old fashioned tech like a smartphone and MergeVR/Cardboard etc. and packages like Vuforia. Though Microsoft…. if you want to send me one or sign up a tech evangelist for this stuff (though it will sell itself) I am available anytime!
We have had AR to experiment with for a while, but this wave is true 3d interaction with a markerless environment. Back in 2007(!) I wondered what it would be like to put an AR marker as a texture into a virtual world (in this case Second Life) and then point a camera, with an AR application running, at the screen that was rendering SL.
Avatar meets AR
The result was that the Ms PacMan in this slightly blurry picture is not visible to my epredator avatar in Second Life. That client only sees the marker. Yet as we pan the camera around, or look at the marker, the second application, which is an AR magic lens, renders Ms PacMan.
Blended reality does not have to be solely one layer into the physical world. It is the easiest to demonstrate but we have so many more choices to explore and opportunities.
I made a little video to show the basic Vuforia demos in Unity3d that are marker based. This stuff just works these days. It is less faffing about than it used to be. It is fantastic to see everything get so accessible and slick.
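Under the hood, a marker based magic lens like these Vuforia demos boils down to two steps: estimate the marker's pose relative to the camera, then project the 3d content through a pinhole camera model so it lands on top of the marker in the image. Here is a minimal sketch of just the projection step; the intrinsic matrix and pose values are illustrative assumptions of mine, not Vuforia's actual API or numbers.

```python
import numpy as np

# Assumed pinhole-camera intrinsics: 800 px focal length, principal
# point at the centre of a 640x480 frame.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_marker, R, t):
    """Project 3d points given in the marker's frame into pixel
    coordinates, using the marker's pose (rotation R, translation t)
    as seen from the camera."""
    cam = (R @ points_marker.T).T + t      # marker frame -> camera frame
    pix = (K @ cam.T).T                    # apply the intrinsics
    return pix[:, :2] / pix[:, 2:3]        # perspective divide

# A marker lying flat, 2 m straight in front of the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

# One corner of a 10 cm marker (5 cm from its centre in x and y).
corner = np.array([[0.05, 0.05, 0.0]])
print(project(corner, R, t))   # -> [[340. 260.]]
```

Every frame the tracker re-estimates R and t from the camera image, so as you or the marker move, the projected model moves with it.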

This is a tipping point. Virtual content, virtual environments and virtual worlds get even more engaging with the more direct natural inputs we can make to our body. Full VR is able to remove the world and drop you somewhere that you can't be or that doesn't exist. Blended Reality is able to bring things that don't exist to the space you inhabit. The interaction metaphors are going to be interesting too. Do you render a virtual waste bin to throw applications into when done, or do you map that to a small crackling virtual fireplace that sits in the actual fireplace in your living room?
I am going to stop listing things as this will go on for a while. Time to go and build some things to share later.

A great week for science and tech, games, 3d printing and AR

There is always something going on in science and emerging technology. However, some weeks just bring a bumper bundle of interesting things all at once. Here in the UK the biggest event had to be the near total eclipse of the Sun. We had some great coverage on the TV, with Stargazing Live sending a plane up over the Faroe Islands to capture the total eclipse. I was all armed and ready with a homemade pinhole camera.
Shredded wheat pinhole camera
This turned out great, but unfortunately we were quite overcast here so it was of little use as a camera. I spent the eclipse at Predlet 2.0's celebration assembly. They had the eclipse on the big screen for all the primary school kids to see. Whilst we had the lights off in the hall it did not get totally dark, but it did get a bit chilly. It was great that the school keyed into this major event that demonstrates the motion of the planets. So, rather like the last one in 1999, I can certainly say I will remember where I was and what we were doing (a conversation I had with @asanyfuleno on Twitter and Facebook).
This brings me on to our technological change and the trajectory we are on. In 1999 I was in IBM Hursley with my fellow Interactive Media Centre crew. A mix of designers, producers and techies, and no suits. It was still the early days of the web and we were building all sorts of things for all sorts of clients. During that eclipse it was, in particular, some more work for Vauxhall cars. We downed tools briefly to look out across Hursley park to see the dusk settle in and flocks of birds head to roost thinking it was night.
It does not seem that long ago but… it is 16 years. When we were building those quite advanced websites Amazon was just starting, Flickr was 6 years away, Twitter about 7 years away, Facebook a mere 5 (but with a long lead time) and we were only on Grand Theft Auto II, still a top down game. We were connected to lots of our colleagues on instant messaging, but general communications were phone, SMS and of course email. So we were not tweeting and sharing pictures, or, as people do now, running live feeds on Meerkat. Many people were not internet banking; trust in communications and computers was not high. We were pre dot.com boom/bust too. Not to mention no one really had much internet access out and about or at home. Certainly no wi-fi routers! We were all enthralled by the still excellent Matrix movie. The phone in that, the slide down communicator style Nokia, was one of the iconic images of the decade.
NB. As I posted this I saw this wonderful lego remake of the lobby scene so just had to add it in this post 🙂

It was a wild time of innovation and one many of us remember fondly, I think. People tended to leave us alone as we brought in money doing things no managers or career vultures knew to jump on. So that eclipse reminds me of a time when I set out on a path of trying to be in that zone all the time. Back then I was getting my first samples from a company that made 3d printers, as I was amazed at the principle, and I was pondering what we could do with designers that knew 3d and this emerging tech. We were also busy playing Quake and Unreal in shared virtual worlds across the LAN in our downtime, so I was already forming my thoughts on our connection to one another through these environments. Experiences that I still share today in a newer hi tech world where patterns are repeating themselves, but better and faster.
That leads me to another movie reference, in the spirit of staying in this zone: footage of a new Terminator T-1000 style of 3d manufacturing. 3D printers may not be mainstream as such, but many more people now get the concept of additive manufacture: laying down layer after layer of material such as plastic, the same way we made coil clay pots out of snakes of rolled clay when we were at school. A newer form of 3D printing went a little viral on the interwebs this week from carbon3d.com. This exciting development pulls an object out of a resin. It is really the same layering principle, but done in a much more sophisticated way. CLIP (Continuous Liquid Interface Production) balances exposing the resin to oxygen or to UV light. Oxygen keeps it as a liquid (hence left behind) and targeted UV light causes the resin to become solid, via polymerization. Similar liquid based processes use lasers fired into a resin. This one, though, slowly draws the object out of the resin, giving it a slightly more ethereal or scifi look. It is also very quick in comparison to other methods. Whilst this video is going faster than actual speed, it is still a matter of minutes rather than hours to create objects.
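The minutes-versus-hours difference falls straight out of the arithmetic: a layered printer pays a fixed cost at every discrete layer, whilst a continuous pull only depends on the height of the part. A back-of-envelope sketch, with all speeds being my own illustrative assumptions rather than manufacturer figures:

```python
# Rough comparison of layer-by-layer printing vs a CLIP-style
# continuous pull. All numbers are illustrative assumptions.
part_height_mm = 50.0

# Conventional layered printing: every 0.1 mm layer costs time to
# trace and reposition (assumed 15 s per layer).
layer_height_mm = 0.1
seconds_per_layer = 15.0
layered_hours = (part_height_mm / layer_height_mm) * seconds_per_layer / 3600

# Continuous pull: the object rises out of the resin at an assumed
# steady 5 mm per minute, with no per-layer pause.
pull_speed_mm_per_min = 5.0
continuous_minutes = part_height_mm / pull_speed_mm_per_min

print(f"layered: {layered_hours:.1f} h, continuous: {continuous_minutes:.0f} min")
# -> layered: 2.1 h, continuous: 10 min
```

The exact figures do not matter; the point is that removing the per-layer stop-start turns hours into minutes, which is what the sped-up video is showing.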

Another video doing the rounds that shows some interesting future developments is one from the Google funded Magic Leap. This is a blended reality/augmented reality company. We already have Microsoft moving into the space with Hololens. Much of Magic Leap's announcements have not been as clearly defined as one might hope. There is some magic coming, and it is a leap. Microsoft of course had a great pre-release of Hololens: some impressive video, but some equally impressive testimonials and articles from journalists and bloggers who got to experience the alpha kit. The video appeared to be a mock up, but a fairly believable one.
Magic Leap were set to do a TED talk but apparently pulled out at the last minute and this video appeared instead.

It got a lot of people excited, which is the point, but it seems even more of a mock up video than any of the others. It is very well done, as the Lord of the Rings FX company Weta Workshop have a joint credit. The technology is clearly coming. I don't think we are there yet in understanding and getting this sort of precise registration and overlay. We will be, and one day it may look like this video.

Of course it's not just the tech but the design that has to keep up. If you are designing a game that has aliens coming out of the ceiling it will have a lot less impact if you try and play outside, or in an atrium with a massive vaulted ceiling. The game has to understand not just where you are and what the physical space is like, but how to use that space. Think about a blended reality board game, or an actual board game for that matter. The physical objects to play Risk, Monopoly etc. require a large flat surface, usually a table. You clear the table of obstructions, set up and play. Now a projected board game could be done on any surface: Monopoly on the wall. It could even remove or project over things hung on the wall, obscure lights etc. It is relying on a degree of focus in one place. A fast moving shooting game where you walk around or look around will be reading the environment, but the game design has to adjust what it throws at you for it to continue to make sense. We already have AR games looking for ghosts and creatures that just float around. They are interesting but not engaging enough.

Full VR doesn't have this problem, as it replaces the entire world with a new view. Even then there are lots of unanswered questions of design: how stories are told, cut scenes, attracting attention, user interfaces, reducing motion sickness etc. Blending with a physical world, where that world could be anywhere or anything, is going to take a lot more early adopter suffering and a number of false starts and dead ends.
It can of course combine with rapid 3d printing, creating new things in the real world that fit with the game or AR/BR experience. Yes, that's more complexity, more things to try and figure out. It is why it is such a rich and vibrant subject.
Just bringing it back a little to another development this week: the latest in the Battlefield gaming franchise, Battlefield Hardline, went live. This, in case you don't do games, is a 3d first person shooter. Previous games have been military; this one is cops and robbers in a modern Miami Vice TV style. One of the features of Battlefield is the massive online combat. It features large spaces and it makes you feel like a small speck on the map. Other shooters, like Call of Duty, are more close in. The large expanse means Battlefield can focus on things like vehicles: flying helicopters and driving cars. Not just you, though; you can be a pilot and deliver your colleagues to the drop zone whilst your gunner gives cover.
This new game has a great online multiplayer mode called Hotwire that taps into vehicles really well. Usually game modes are capture the flag or holding a specific fixed point to win the game. In Hotwire you grab a car/lorry etc. and try to keep it safe. It means that you have to do some mad game driving, weaving and dodging. It also means that your compatriots get to hang out of the windows of the car trying to shoot back at the bad guys. It is very funny and entertaining.
What also struck me was the single player game called “episodes”. This deliberately sticks with a TV cop show format as you play through the levels. After a level has finished, the how-you-did page looks like Netflix, with a “next episode starts in 20 seconds” down in the bottom right. If you quit a level before heading to the main menu it does a “next time in Battlefield Hardline” mini montage of the next episode. As the first cut scene played I got a Miami Vice vibe, which the main character then referenced directly. It was great timing, an in-joke, but one for those of us of a certain age for whom Miami Vice was the show to watch. Fantastic stuff.
I really like its style. It also has a logo builder on the website so in keeping with what I always do I built a version of the Feeding Edge logo in a Hardline style.
Battlefield Hardline Feeding Edge logo
I may not be great at the game, as I bounce around looking for new experiences in games, but I do like a good bit of customisation to explore.

MergeVR – a bit of HoloLens but now

If you are getting excited and interested, or just puzzling over what is going on with the Microsoft announcement about Hololens, and can't wait the months/years before it comes to market, then there are some other options, very real, very now.
Just before Christmas I was very kindly sent a prototype of a new headset unit that uses an existing smartphone as its screen. It is called MergeVR. The first one like this we saw was Google Cardboard, Google's almost satirical take on the Oculus Rift: a fold up box that lets you strap your Android to your face.

MergeVR is made of very soft, comfortable spongy material. Inside are two spherical lenses that can be slid laterally in and out to adjust to the divergence of your eyes and get a comfortable feel.
Rather like the AntVR I wrote about last time, this uses the principle of one screen split into two views. The MergeVR uses your smartphone as the screen, and it slides comfortably into the spongy material at the front.
Using an existing device has its obvious advantages. The smartphones already have direction sensors in them, and screens designed to be looked at close up.
MergeVR is not just about the 3d experience of Virtual Reality (one where the entire view is computer generated). It is, as the name Merge suggests, about augmented reality. In this case it is augmented reality that takes your direct view of the world and adds data and visuals to it. This is known as a magic lens: you look through the magic lens and see things you would not normally be able to see, as opposed to a magic mirror, where you look at a fixed TV screen to see the effects of a camera view merged with the real world.

The MergeVR has a slot for the iPhone (in my case) camera to see through. This makes it very different from some of the other Phone On Face (POF – made up acronym) devices. The extra free device I got with the AntVR, the TAW, is one of these non pass-through POFs. It is a holder and lenses, with a folding mechanism that adjusts to hold the phone in place. With no pass-through it is just for watching 3d content.


AntVR TAW
Okay, so the MergeVR lets you use the camera, see the world, and then watch the screen close up without holding anything. The lenses make your left eye look at the right half of the screen and your right eye at the left half. One of the demo applications is instantly effective and has a wow factor. Using a marker based approach, a dinosaur is rendered in 3d on the marker. Marker based AR is not new, neither is iPhone AR, but the stereoscopic hands-free approach, where the rest of the world is effectively blinkered for you, adds an extra level of confusion for the brain.

Normally, if you hold a phone up to a picture marker, the code will spot the marker, work out its orientation and relative position in the view, then render the 3d model on top, so if you or the marker moves, the model moves too. When holding the iPhone up you can of course still see around it, rather like holding up a magnifying glass (magic lens, remember). When you POF, though, your only view of the actual world is the camera view of the phone. So when you see something added and you move your body around, it is there in your view. It is only the slight lag, and the fact the screen is clearly not the same resolution or lighting as the real world, that stops you believing it totally.
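Under the hood, that marker step is just a rigid transform: the tracker estimates the marker's rotation and translation relative to the camera each frame, and the renderer pushes the model's vertices through it. A minimal Python sketch of the idea, assuming the tracking library has already recovered a yaw angle and a translation (the function names here are illustrative, not from any real AR toolkit):

```python
import math

def marker_pose(yaw, tx, ty, tz):
    """Build a 4x4 camera-space pose for a marker rotated by `yaw`
    radians about the vertical axis and translated by (tx, ty, tz).
    A real tracker would estimate these values from the camera image
    every frame; here they are supplied by hand."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [
        [c, 0.0, s, tx],
        [0.0, 1.0, 0.0, ty],
        [-s, 0.0, c, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(pose, vertex):
    """Move a model-space vertex (x, y, z) into camera space, so the
    model appears glued to the marker as either of them moves."""
    x, y, z = vertex
    return tuple(
        row[0] * x + row[1] * y + row[2] * z + row[3]
        for row in pose[:3]
    )
```

Re-estimating the pose and re-running every vertex through it each frame is really all "the model moves with the marker" amounts to.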
The recently previewed Microsoft HoloLens and the yet to be seen Google funded Magic Leap are the next step, removing the screen. They let you see the real world, albeit through some panes of glass, and then use projection tricks near to the eye, probably very similar to Pepper's ghost, to adjust what you see and how it is shaded, coloured etc., based on a deep sensing of the room and environment. It is markerless, room aware blended reality, using the physical and the digital.

Back to the MergeVR. It also comes with a bluetooth controller for the phone, a small handheld device that lets you talk to it. Obviously, with the touch screen strapped to your face in POF mode, you can't press any buttons 🙂 Many AR apps and examples like the DinoAR demo simply use your head movements and the sensors in the phone to determine what is going on. Other things, though, will need some form of user input. As the phone can see, it can see hands, but without a Leap Motion controller or a Kinect to sense the body, some simpler mechanism has to be employed.
However, this is where MergeVR gets much more exciting and useful for any of us techies and metaverse people. The labs are not just thinking about the POF container but the content too. A Unity3d package is being worked on. This provides camera prefabs (rather like the Oculus Rift ones) that split the Unity3d view into a stereo camera pair at runtime, with the right shape, size and perspective for the MergeVR view. It provides access to the bluetooth controller inputs too.
This means you can quickly build MergeVR 3d environments and deploy to the iPhone (or Droid). Combine this with some of the AR toolkits and you can make lots of very interesting applications, or simply add 3d modes to existing ones. With the new Unity3d 4.6 user interface system it will be even easier to build heads-up displays.
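The stereo split itself is simple geometry: one scene camera becomes two, each offset by half the interpupillary distance and rendered to one half of the phone screen. The real MergeVR package presumably does this with Unity camera components in C#; this is just a Python sketch of the idea, with made-up dictionary fields rather than any real API:

```python
def stereo_cameras(cam_pos, ipd=0.064, width=1334, height=750):
    """Split one virtual camera into a left/right pair: each eye sits
    half the interpupillary distance (ipd, metres) either side of the
    original position and renders to one half of the phone screen.
    The screen size here is an iPhone-ish example, not a requirement."""
    x, y, z = cam_pos
    half = ipd / 2.0
    left = {"pos": (x - half, y, z),
            "viewport": (0, 0, width // 2, height)}
    right = {"pos": (x + half, y, z),
             "viewport": (width // 2, 0, width // 2, height)}
    return left, right
```

The slight difference between the two eye positions is what gives the close-up screen its sense of depth.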
So within about 2 minutes of starting Unity I had a 3d view up on the iPhone in the MergeVR using Unity Remote. The only problem I had was using the USB cable for quick Unity Remote debugging, as the left hand access hole was a little too high. There is side access on the right, but the camera needs to be facing that way. Of course, being nice soft material, I can just make my own hole in it for now. It is a prototype after all.
It’s very impressive, very accessible and very now (which is important to us early adopters).
Let's get blending!

(Note the phone is not in the headset as I needed it to take the selfie 🙂)

Working the Cloud – Launch Party

I was pleased to get the chance to head to London last Friday for the launch of Kate Russell's new book, Working the Cloud. Amazingly this is Kate's first book, given how often she has provided interesting tech news on TV shows like BBC Click. Mind you, I have not written a book yet either 😉
So yes I was at the launch party along with other media people and fellow tweeters, but I wouldn’t write about something unless I thought it had something going for it. This book and its supporting material certainly does.

So what is the book all about? It is a fantastic curated collection of useful tools to help anyone who has not yet fully realised the power of the web in getting things done. Starting a business, communicating with customers, creating interesting content: it is all covered.
It is not merely a collection of URLs, as each entry explains the benefits of the particular tool or web application. Each section also offers more than one alternative and is peppered with tips. This is because Kate is writing about things she has used, is using, and in some cases has stopped using for a better alternative.
This is really an expression of Maker culture, but aimed at the mainstream ability to just get on with things. Too often people want to read lots of instructions for each application, making decisions as if there were no way to change afterwards. I think the book points to a spirit of doing. There are a stack of tools out there, they are accessible, often free or very cheap, and there is no excuse for not using them.
To prove the point that there is more than one way to uv unwrap a feline polygon model Kate has done more than just publish a book. There is the ongoing companion website http://workingthecloud.biz/ and apps for mobile devices too.
The cover of the book itself is an augmented reality trigger for Aurasma's app too, something worth having a look (and a listen) at if you see a physical copy of the book in a shop. The intro can raise a few eyebrows if someone is looking over your shoulder 😉
AR working the cloud
Kate also did a virtual book signing. Oddly I missed this as I was in a Choi Kwang Do class at the time. However, it was a transmedia gig using Google Hangouts.

There were some tweets with @andypiper about the other first virtual book launch back in 2006 in Second Life. Which I remember very well 🙂
Virtual book signing for Longtail

This is of course a different sort of event 🙂 So we are not going to take Kate’s first away from her 😉 Though the next edition of the book must have some 3d virtual world communication in it. Who knows maybe I can guest write that 🙂
I am looking forward to Kate's next book, which she crowdfunded on Kickstarter: some science fiction for the new Elite game. It again proves that she is doing all the things in the book, not just writing about them.

CGI people and horses, People as CGI

Yes, I know some of us of a certain web age still think CGI is "Common Gateway Interface" and it gets us thinking in Perl and about $POST and $GET, but CGI is now commonly known as Computer Generated Imagery. Things have certainly changed over the last few years with respect to what can actually be generated and just how good it has got.
Last night I saw the new Galaxy chocolate ad. Normally if there is some CGI my brain goes into "how did they do that" mode, travelling around the uncanny valley. This ad, however, I watched and thought wow, that actress looks just like Audrey Hepburn. A quick google revealed that it was Audrey Hepburn, the CGI version. Watching it again I can see that it is CGI, though not all the time.

The other ad doing the rounds that is obviously CGI, unless horses really can moonwalk, is 3's dancing pony. Strangely it is the real life bits that jar with me on this one. The close-up on what are clearly model hooves is jarring compared to the smoothness of the rest of it.

It is nice to see 3 offering a pony dance mixer as a YouTube application. It is a lot more Monty Python than slick CGI, but it's worth a few moments to have a go 🙂 Mine is here.
What I saw today, though, whilst researching more about holodecks, was a great twist on the sort of building projection projects that animate entire cityscapes. Here the projection canvas is a person. This is really brilliant, I think; lots of the images certainly stick with you after watching it. It works by scanning the person (who then sits very still); the scan tells the projector how to deliver and map the textures onto the surface, the same way designers use a 3d application to create a UV map of the textures and even baked lighting.
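The mapping step boils down to the same pinhole projection a virtual camera uses, run in reverse: every scanned surface point corresponds to the projector pixel that must carry its texture. A rough Python sketch, assuming an idealised pinhole projector with an illustrative focal length and principal point (a real rig would calibrate both):

```python
def project(point, focal=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection: map a scanned 3d surface point (metres, in
    the projector's coordinate frame, z pointing away from the lens)
    to the pixel (u, v) that must carry its texture. `focal`, `cx`
    and `cy` are illustrative values, not from any real projector."""
    x, y, z = point
    return cx + focal * x / z, cy + focal * y / z
```

As long as the person sits still, the scan plus this projection keeps the texture locked to their face and body, exactly like baking a UV map onto a model.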

This is more blended reality at work 🙂 It is similar to how the Kinect performs some visual trickery on screen (not yet via projection, but maybe soon) in things like Kinect Party.

Making animation easier with QUMARION

I tweeted about this the other day, but after it came up in the Q&A session at yesterday's blended reality pitch I realized I had not put anything more here about this interesting device.
The QUMARION is rather like the posable wooden mannequins that artists use to practice drawing figures.
Mannequin for drawing
It is instead fully instrumented with sensors to work with a digital description of a human skeleton.

So as you pose the figure, that translates into poses in the 3d modelling package.
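That mannequin-to-model translation is classic forward kinematics: the angle each instrumented joint reports is chained along the bones to give the skeleton's pose. A minimal Python sketch, assuming a planar two-bone arm with illustrative bone lengths (QUMARION's real skeleton is of course full-body and 3d):

```python
import math

def arm_positions(shoulder_deg, elbow_deg, upper=0.30, lower=0.25):
    """Forward kinematics for a planar two-bone arm: chain the joint
    angles a posable mannequin's sensors might report into elbow and
    wrist positions for the digital skeleton. Angles in degrees; the
    elbow angle is measured relative to the upper arm. Bone lengths
    (metres) are made-up example values."""
    a1 = math.radians(shoulder_deg)
    a2 = a1 + math.radians(elbow_deg)
    elbow = (upper * math.cos(a1), upper * math.sin(a1))
    wrist = (elbow[0] + lower * math.cos(a2),
             elbow[1] + lower * math.sin(a2))
    return elbow, wrist
```

A 3d package applies exactly this chaining, joint by joint, down the whole rig each time you move the mannequin.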
A purist 3d designer may regard that as undermining their skills in manipulating and understanding the interface on a 2d screen. However, this came up as an answer to a question about blended reality, as I was talking about how sometimes the technology can get in the way, and other times it disappears and lets us use what we know to enhance an experience.
The QUMARION is rather like using the real guitar in Rocksmith: it may be a more appropriate tool for understanding and communicating with an application.
I know that when I use 3d packages there is a barrier in having to deal with a mental translation of a 2d representation. Being able to just pose a physical device and explain what is needed physically would work for me.
A long while ago I was trying to make some tennis animations for a well known Second Life project. I found myself standing and looking in a mirror, performing the action, then sitting down and making that action work on a very simple digital rig, before tuning it so that it looked better on screen. I had no motion capture, which would obviously have helped in the first place, but for the extra artistic interpretation and subtle tweaks a hands-on device would have helped a great deal too.
Now this device is input only as far as I know, so there is an obvious extension in using it as an output device too. If I mocap a move, but then the device can play it back in physical steps and frames, I could tweak and enhance it. Obviously in games there are some moves that just don't exist; you can't get certain flips and jumps happening. You can, however, start with a basis of what you can do.
Again, of course, this relates to studying the forms in Choi Kwang Do. A physical, but digitally recorded, recreation may help someone understand even more. Also, a mannequin can be made to hold a position in the transition from one move to another that a person bound by the laws of physics cannot. It becomes a physical pause button.
Another extension to the idea: this device restricts you to one rig. A component model that lets you build any size or shape of joints to create the armature for any creation would be incredible. Combine that with the ability to 3d print the components in the first place, put them together, have that create the rigged model and then animate away. There are some fantastic opportunities for people to create interesting things as this approach evolves.