blendedreality


Untethering Humans, goodbye screens

We are on the cusp of a huge change in how we as humans interact with one another, with the world and with the things we create for one another. A bold statement, but one that stands up, I believe, when you follow some historical developments in technology and in social and work-related change.
The change involves all the great terms: Augmented Reality, Virtual Reality, Blended Reality and the metaverse. It is a change with one major feature, that of untethering, or unshackling, us as human beings from a fixed place or a fixed view of the world.

Hovering
Here I am being untethered from the world in a freefall parachute experience, whilst also being on TV 🙂

All the great revolutions in human endeavour have involved either transporting us via our imagination to another place or concept (books, film, plays and so on) or transporting us physically to another place (the wheel, steam trains, flight). Even modern telecommunications fit into that bracket. The telephone or the video link transport us to a shared virtual place with other people.

Virtual worlds are, as I may have mentioned a few times before, ways to enhance the experience of humans interacting with other humans and with interesting concepts and ideas. That experience, up to now, has been a tethered one. Over the last few decades we have become reliant on rectangular screens: windows on the world, showing us text, images, video and virtual environments. Those screens have themselves evolved. We have had large bulky cathode ray tubes, LED, plasma, OLED and various flat wall projectors. Yet the screen has always remained a frame of fixed size, tethering us to a tunnel-vision view. The screen is a funnel through which elements are directed at us. We started the untethering process with wi-fi and mobile communications. Laptops, tablets and smartphones gave us the ability to take that funnel, that focused view of the world, with us.

More recent developments have led to the VR headsets: an attempt to provide an experience that completely immerses us by providing a single screen for each eye. It is a specific medium for many yet-to-be-invented experiences. It does, though, tether us further. It removes the world. That is not to say the Oculus Rift, Morpheus and HTC Vive are not important steps, but they are half the story of untethering the human race. Forget the bulk and the weight; we are good at making things smaller and lighter, as we have seen with the mobile telephone. The pure injection of something into our eyes and ears via a blinkering system feels, and is, more tethering. It is good at the second affordance, of transporting us to places with others, and it is where virtual worlds and the VR headsets naturally and obviously co-exist.

The real revolution comes from full blended reality, and realities. That plural is important. We have had magic lens and magic mirror Augmented Reality for a while: marker-based and markerless ways to use one of the screens that we carry around, or have fixed in our living rooms, to show us digital representations of things placed in our environment. They are always fun. However, they are almost always just another picture inside our screens. Being able to see, feel and hear things in our physical environment, wherever we are in the world, and have them form part of that environment truly untethers us.
Augmented Reality is not new of course. We, as in the tech community, have been tinkering with it for years. Even this mini example on my old Nokia N95 back in 2008 starts to hint at the direction of travel.

Hackday5 Augmented Reality step 2

The devices used to do this are obviously starting at a basic level, though we have AR headsets such as Microsoft's HoloLens and Google-backed Magic Leap starting to push the boundaries. They will not be the final result of this massive change. With an internet of things world, with the physical world instrumented and producing data, with large cloud servers offering compute power at the end of a wireless connection to analyse that data, and with the ability to visualize and interact with things in our physical environment, we have a lot to discover and to explore.

I mentioned realities, not just reality. There is no reason to only have augmentation into a physical world. After all, if you are immersed in a game environment or a virtual world you may actually choose, because you can, to shut out the world, to draw the digital curtains and explore. However, just as when you are engaged in any activity anywhere, things come to you in context. You need to be able to interact with other environments from whichever one you happen to be in.

Take an example using Microsoft HoloLens, Minecraft and Skype. In today's world you would have Minecraft on your laptop/console/phone, digging around, building and sharing the space with others. A Skype call comes in from someone you want to talk to. You window away from Minecraft and focus on Skype. It is all very tethered. In a blended reality, as HoloLens has shown, you can have Minecraft on the rug in front of you and Skype hanging on the wall next to the real clock: things and data placed in the physical environment in a way that works for you and for them. However, you may want to be more totally immersed in Minecraft and go full VR. If something can make a small hole in the real world for the experience, then it can surely make an all-encompassing hole, thus providing you with only Minecraft. Yet, if it can place Skype on your real wall, then it can place it on your virtual walls too and bring that along with you.
This is very much a combination that is going to happen. It is not binary to be either in VR or not, in AR or not. It is either, both or neither.

It is noticeable that Facebook, who bought heavily into Oculus Rift, purchased Surreal Vision last month, a company that specializes in using instrumentation and scanning kit to make sense of the physical world and place digital data in that world. Up until now Oculus Rift, which has really led the VR charge since its Kickstarter (yes, I backed that one!), has been focused on the blinkered version of VR. This purchase shows the intent to go for a blended approach. Obviously this is needed, as otherwise Magic Leap and HoloLens will quickly eat into the Rift's place in the world.
So three of the world's largest companies, Facebook, Google and Microsoft, have significant plays in blended reality and the "face race", as it is sometimes called, to get headsets on us. Sony have Project Morpheus, which is generally just VR, yet Sony have had AR applications for many years with PS Move.
Here is Predlet 2.0 enjoying the AR EyePet experience back in 2010 (yes, five years ago!)

So here it is. We are getting increasingly accurate ways to map the world into data, we have the world producing IoT data streams, we have ever more ubiquitous networks and we have devices that know where they are and what direction they are facing. We have high-powered backend servers that can make sense of our speech and of context. On top of that we have free-roaming devices to feed and overlay information to us, yes a little bit bulky and clunky, but that will change. We are almost completely untethered. Wherever we are we can experience whatever we need or want, and in the future we can blend that with more than one experience. We can introduce others to our point of view or keep our point of view private. We can fully immerse or just augment, or augment our full immersion. We can also make the virtual real with 3D printing!
That is truly exciting and amazing isn’t it?
It makes this sort of thing in Second Life seem an age away, but it is the underpinning for me and many others. Even in this old picture of the initial Wimbledon build in 2006 there is a cube hovering at the back. It is one of Jessica Qin’s virtual world virtual reality cubes.
Wimbledon 06 magic carpet
That cube and other things like it provided an inspiration for this sort of multiple-level augmentation of reality. It was constrained by the tethered screen but was, and still is, remarkably influential.
Jessica Qin Holocube
Here my Second Life avatar, and so my view of a virtual world, is inside a 3D effect that provides a view of something else: a curved 3D view, not just a flat picture. It is as if the avatar is wearing a virtual Oculus Rift, if that helps get the idea across.
This is from 2008, by way of a quick example of the fact that even then it could work.

So let's get blending then 🙂

Amiga kickstarter book, breaking mirror worlds & VR

A few days ago my copy of the book “Commodore Amiga: a visual Commpendium”, which I backed on Kickstarter, arrived. The book is by Sam Dyer through Bitmap Books. It came with a load of extra goodies from my backing, and my name, along with my fellow backers', vanity printed in the appendix. The only slight problem was that unlike all the other Kickstarter campaigns I wasn't “allowed” to have a credit as epredator, as it made the list look untidy unless we had normal names. That is the author's choice of course 🙂
Fond Amiga memories all in 1 book
My computer-owning history went ZX81, Commodore 64, then Amiga 500 (and later 1200). The Amiga arrived in 1987 and became my main machine for most of my polytechnic/university time. It caused me to get an overdraft for the first time to buy an external hard drive for a piece of work I was doing (that, and to play the later Cinemaware games that needed two floppy disk drives to work).
It was the machine I coded my final year project on, which was a mix of hardware and software, but which also had to work on the much larger and more expensive Apollo computers we had.
It is the machine I spent hours with, sellotaped-together graph paper in hand, planning my SimCity builds and mapping Bard's Tale dungeons.
It is also the machine I first experienced proper network gaming on, with a null modem cable and an F/A-18 flight simulator. Not only was that the first proper LAN party gaming, but it forged the idea that machines do not have to have a consistent shared view of the world just because they are connected. The F/A-18 simulator let my good friend Wal and I fly around shooting at one another in a shared digital space. It was the early days of having a printer and being able to do “desktop publishing”, aka DTP. I even produced a poster for our little event.
When we played we had different terrain packs running on each Amiga, as we had different versions of the game. There was no server; this was real peer-to-peer. The terrain was local to each Amiga, but our relative locations in that space were shared. Each machine was doing its own collision detection. It meant that if I saw mountains I needed to avoid them, yet on the other machine that same local space might be flat desert. We all perceive reality differently anyway, but here we were forced to perceive and act according to whatever the digital model threw at us. In reality we kept to the sky, and forcing your opponent into their own scenery was considered unsporting (though occasionally funny and much needed).
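As a thought experiment rather than anything resembling the original Amiga code, here is a minimal C# sketch of that architecture: the only thing the peers exchange is relative position, while terrain and collision detection stay entirely local to each machine (all names and structures here are illustrative).

```csharp
// A minimal sketch of peer-to-peer play with local terrain:
// positions are shared, the world each player collides with is not.
using System;

struct PeerUpdate            // the only data sent between machines
{
    public float X, Y, Z;    // opponent's position in the shared coordinate space
    public float Heading;
}

class LocalSimulation
{
    // Each machine keeps its own terrain; heights can differ between players.
    readonly float[,] terrainHeights = new float[256, 256];

    public bool HasCrashed(float x, float y, float z)
    {
        // Collision is checked only against *this* machine's terrain.
        int i = Math.Clamp((int)x, 0, 255);
        int j = Math.Clamp((int)z, 0, 255);
        return y <= terrainHeights[i, j];
    }

    public void OnPeerUpdate(PeerUpdate update)
    {
        // The opponent is drawn at the shared relative position, even though
        // the scenery under them may look completely different on this machine.
        Console.WriteLine($"Opponent at ({update.X}, {update.Y}, {update.Z})");
    }
}
```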
This set the precedent for me that whilst mirror worlds, virtual worlds that attempt to be 100% like a real place, have a reason to exist we do not have to play by the same physical rules of time and space in virtual environments.
Other things of note about the Amiga: well, I coded as predator on the Commodore 64 and that moved across to the Amiga too. The e was a later addition on the front, but the principles are the same.
My wife also discovered gaming on my Amiga, getting completely wrapped up and in the zone on SimCity and realising it was 4am. Later it would be Lemmings that caught her attention. Hence she is now elemming on Twitter.
The book is full of classic images, nearly all of which I have some sort of memory of that amounts to more than just “yes, I recognise that picture”.
Games like Alien Breed (a Gauntlet-like top-down shooter) and The Secret of Monkey Island (a classic point-and-click humorous adventure) on their own rack up considerable hours of entertainment, for very different reasons.
Fond Amiga memories all in 1 book
Whilst fondly reminiscing and remembering things that impact how I think and work today, I was also, at the same time, in current and future mode. Right next door on the table was my copy of Games(TM).
My life history in 1 picture #nearly #amiga #vr #metaverse
As I tweeted at the time My life history in 1 picture #nearly #amiga #vr #metaverse.
When we put on a headset, a total immersion one, we get a view of a world that is instantly believable. Something fed directly to our eyes and linked to the direction we are looking becomes a convincing reality. In a shared virtual world we will assume that we are all seeing the same thing. That does not have to be the case, as with the F/A-18 example. We can have different experiences yet share the same world. To help think about that, consider the game Battleships. Each player has the same map, the same relative grid references, on a piece of paper or on a plastic peg board. Yet on that map you can only see your own boats and any pieces of information you have gained through playing. When considering a mirror world or a virtual world build it can be harder to keep this in mind. Yet many games and environments already have a little dollop of this behaviour, with personal displays of health, ammo, speed and so on in a heads-up display. Those HUDs are an augmented reality display in a virtual world.
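To make the Battleships idea concrete, here is a small, purely illustrative C# sketch of one shared grid with per-player views: everyone uses the same grid references, but each player only sees their own pieces plus whatever they have discovered (the class and method names are hypothetical).

```csharp
// One shared board, different views: your own cells are fully visible,
// an opponent's cells only show what you have learned through play.
using System.Collections.Generic;

enum Cell { Unknown, Empty, Ship, Hit }

class SharedBoard
{
    readonly Dictionary<string, Cell[,]> trueBoards = new();       // full state per player
    readonly Dictionary<string, Cell[,]> knownByOpponent = new();  // what the other side has learned

    public SharedBoard(params string[] players)
    {
        foreach (var p in players)
        {
            trueBoards[p] = new Cell[10, 10];       // same 10x10 grid for everyone
            knownByOpponent[p] = new Cell[10, 10];  // starts entirely Unknown
        }
    }

    public Cell View(string viewer, string owner, int x, int y)
    {
        return viewer == owner
            ? trueBoards[owner][x, y]        // my board: everything visible
            : knownByOpponent[owner][x, y];  // their board: only hits and misses so far
    }
}
```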
When we now consider the blended view headsets like the HoloLens and the MergeVR, we are taking the real world as the server, in effect. It is a fixed environment. We are then placing and enhancing what we see in that world with 3D overlays, convincing the viewer the digital stuff is real.
Unlike the F/A-18 terrain, the real world is there for each person. If there is a table in the middle of the room, even if you make it look to a headset wearer as though it is not there, with object removal and clever visuals, they will still trip over it. However, the other way around can make for an interesting dynamic: headset wearers made to think there are obstacles and things in their way that they have to move around, but different obstacles for each headset wearer. Just a little thought experiment in perception. I didn't even throw in anything about 3D printers actually making the new obstacles in the real world. That's a bit much for a Monday morning.
Anyway, the Amiga book is great. It was a fantastic period in games and in home technology, but we have many more exciting times coming.

A great week for science and tech, games, 3d printing and AR

There is always something going on in science and emerging technology. However, some weeks just bring a bumper bundle of interesting things all at once. Here in the UK the biggest event had to be the near-total eclipse of the Sun. We had some great coverage on the TV, with Stargazing Live sending a plane up over the Faroe Islands to capture the total eclipse. I was all armed and ready with a homemade pinhole camera.
Shredded wheat pinhole camera
This turned out great, but unfortunately we were quite overcast here so it was of little use as a camera. I spent the eclipse at Predlet 2.0's celebration assembly. They had the eclipse on the big screen for all the primary school kids to see. Whilst we had the lights off in the hall it did not get totally dark, but it did get a bit chilly. It was great that the school keyed into this major event that demonstrates the motion of the planets. So, rather like the last one in 1999, I can certainly say I will remember where I was and what we were doing (a conversation I had with @asanyfuleno on Twitter and Facebook).
This brings me on to our technological change and the trajectory we are on. In 1999 I was in IBM Hursley with my fellow Interactive Media Centre crew: a mix of designers, producers and techies, and no suits. It was still the early days of the web and we were building all sorts of things for all sorts of clients. During that eclipse it was some more work for Vauxhall cars. We downed tools briefly to look out across Hursley Park to see the dusk settle in and flocks of birds head to roost, thinking it was night.
It does not seem that long ago but… it is 16 years. When we were building those quite advanced websites Amazon was just starting, Flickr was 6 years away, Twitter about 7 years away, Facebook a mere 5 (but with a long lead time), and we were only on Grand Theft Auto II, still a top-down Pac-Man clone. We were connected to lots of our colleagues on instant messaging, but general communications were phone and SMS, and of course email. So we were not tweeting and sharing pictures, or, as people do now, streaming live feeds on Meerkat. Many people were not internet banking; trust in communications and computers was not high. We were pre dot-com boom/bust too. Not to mention no one really had much internet access out and about or at home. Certainly no wi-fi routers! We were all enthralled by the still excellent Matrix movie. The phone in that, the slide-down communicator-style Nokia, was one of the iconic images of the decade.
NB. As I posted this I saw this wonderful Lego remake of the lobby scene, so I just had to add it to this post 🙂

It was a wild time of innovation and one many of us remember fondly, I think. People tended to leave us alone, as we brought in money doing things no managers or career vultures knew to jump on. So that eclipse reminds me of a time when I set out on a path of trying to be in that zone all the time. Back then I was getting my first samples from a company that made 3D printers, as I was amazed at the principle, and I was pondering what we could do with designers who knew 3D and this emerging tech. We were also busy playing Quake and Unreal in shared virtual worlds across the LAN in our downtime, so I was already forming my thoughts on our connection to one another through these environments: experiences that I still share today in a newer hi-tech world where patterns are repeating themselves, but better and faster.
That leads me to another movie reference, in the spirit of staying in this zone: this footage of a new type of Terminator T-1000 style 3D manufacturing. 3D printers may not be mainstream as such, but many more people now get the concept of additive manufacture: laying down layer after layer of material such as plastic, the same way we made coil clay pots out of snakes of rolled clay when we were at school. A newer form of 3D printing went a little viral on the interwebs this week from carbon3d.com. This exciting development pulls an object out of a resin. It is really the same layering principle, but done in a much more sophisticated way. CLIP (Continuous Liquid Interface Production) balances exposing the resin to oxygen or to UV light. Oxygen keeps it as a liquid (hence left behind), and targeted UV light causes the resin to become solid through polymerization. Similar liquid-based processes use lasers fired into a resin; this one, though, slowly draws the object out of the resin, giving it a slightly more ethereal, sci-fi look. It is also very quick in comparison to other methods. Whilst the video is going faster than actual speed, it is still a matter of minutes rather than hours to create objects.

Another video doing the rounds that shows some interesting future developments is one from the Google-funded Magic Leap. This is a blended reality/augmented reality company. We already have Microsoft moving into the space with HoloLens. Many of Magic Leap's announcements have not been as clearly defined as one might hope: there is some magic coming and it is a leap. Microsoft of course had a great pre-release of HoloLens, with some impressive video but some equally impressive testimonials and articles from journalists and bloggers who got to experience the alpha kit. The video appeared to be a mock-up, but a fairly believable one.
Magic Leap were set to do a TED talk but apparently pulled out at the last minute, and this video appeared instead.

It got a lot of people excited, which is the point, but it seems even more of a mock-up video than any of the others. It is very well done, as the Lord of the Rings FX company Weta Workshop have a joint credit. The technology is clearly coming. I don't think we are there yet in understanding and getting the sort of precise registration and overlays it shows. We will get there, and one day it may look like this video.

Of course it's not just the tech but the design that has to keep up. If you are designing a game that has aliens coming out of the ceiling it will have a lot less impact if you try and play outside, or in an atrium with a massive vaulted ceiling. The game has to understand not just where you are and what the physical space is like, but how to use that space. Think about a blended reality board game, or an actual board game for that matter. The physical objects to play Risk, Monopoly etc. require a large flat surface, usually a table. You clear the table of obstructions, set up and play. A projected board game could be done on any surface, Monopoly on the wall. It could even remove or project over things hung on the wall, obscure lights etc. It is relying on a degree of focus in one place. A fast-moving shooting game where you walk around or look around will be reading the environment, but the game design has to adjust what it throws at you so that it continues to make sense. We already have AR games looking for ghosts and creatures that just float around. They are interesting but not engaging enough.

Full VR doesn't have this problem, as it replaces the entire world with a new view. Even then there are lots of unanswered questions of design: how stories are told, cut scenes, attracting attention, user interfaces, reducing motion sickness and so on. Blending with a physical world, where that world could be anywhere or anything, is going to take a lot more early adopter suffering and a number of false starts and dead ends. It can of course combine with rapid 3D printing, creating new things in the real world that fit with the game or AR/BR experience. Yes, that's more complexity, more things to try and figure out. It is why it is such a rich and vibrant subject.
Just bringing it back a little to another development this week: the latest in the Battlefield gaming franchise, Battlefield Hardline, went live. This, in case you don't do games, is a 3D first-person shooter. Previous games have been military; this one is cops and robbers in a modern Miami Vice TV style. One of the features of Battlefield is the massive online combat. It features large spaces and it makes you feel like a small speck on the map. Other shooters, like Call of Duty, are more close in. The large expanse means Battlefield can focus on things like vehicles: flying helicopters and driving cars. Not just you, though; you can be a pilot and deliver your colleagues to the drop zone whilst your gunner gives cover.
This new game has a great online multiplayer mode called Hotwire that taps into vehicles really well. Usually game modes are capture the flag or holding a specific fixed point to win the game. In Hotwire you grab a car/lorry etc. and try to keep it safe. It means that you have to do some mad game driving, weaving and dodging. It also means that your compatriots get to hang out of the windows of the car trying to shoot back at the bad guys. It is very funny and entertaining.
What also struck me was the single-player mode called “episodes”. This deliberately sticks with a TV cop show format as you play through the levels. After a level has finished, the “how you did” page looks like Netflix, with a “next episode starts in 20 seconds” countdown down in the bottom right. If you quit a level before heading to the main menu it does a “next time in Battlefield Hardline” mini montage of the next episode. As the first cut scene played I got a Miami Vice vibe, which the main character then echoed by referencing it directly. It was great timing, and an in-joke, but one for those of us of a certain age for whom Miami Vice was the show to watch. Fantastic stuff.
I really like its style. It also has a logo builder on the website so in keeping with what I always do I built a version of the Feeding Edge logo in a Hardline style.
Battlefield Hardline Feeding Edge logo
I may not be great at the game, as I bounce around looking for new experiences in games, but I do like a good bit of customisation to explore.

MergeVR – a bit of HoloLens but now

If you are getting excited and interested, or just puzzling over what is going on with the Microsoft announcement about HoloLens, and can't wait the months/years before it comes to market, then there are some other options, very real, very now.
Just before Christmas I was very kindly sent a prototype of a new headset unit that uses an existing smartphone as its screen. It is called MergeVR. The first one like this we saw was the almost satirical take on Oculus Rift that Google took with Google Cardboard: a fold-up box that lets you strap your Android phone to your face.

MergeVR is made of very soft, comfortable, spongy material. Inside are two spherical lenses that can be slid laterally in and out to adjust to the divergence of your eyes and get a comfortable feel.
Rather like the AntVR I wrote about last time, this uses the principle of one screen split into two views. The MergeVR uses your smartphone as the screen, and it slides comfortably into the spongy material at the front.
Using an existing device has its obvious advantages. The smartphones already have direction sensors in them, and screens designed to be looked at close up.
MergeVR is not just about the 3D experience of Virtual Reality (one where the entire view is computer generated). It is, by its very name Merge, about augmented reality too. In this case it is an augmented reality taking your direct view of the world and adding data and visuals to it. This is known as a magic lens: you look through the magic lens and see things you would not normally be able to see. As opposed to a magic mirror, where you look at a fixed TV screen to see the effect of a camera view merged with the real world.

The iPhone (in my case) camera has a slot to see through in the MergeVR. This makes it very different from some of the other Phone On Face (POF – made-up acronym) devices. The extra free device I got with the AntVR, the TAW, is one of these non-pass-through POFs. It is a holder and lenses, with a folding mechanism that adjusts to hold the phone in place. With no pass-through it is just for watching 3D content.


AntVR TAW
Okay, so the MergeVR lets you use the camera, see the world, and then watch the screen close up without holding anything. The lenses make your left eye look at the right half and the right eye at the left half. One of the demo applications is instantly effective and has a wow factor. Using a marker-based approach, a dinosaur is rendered in 3D on the marker. Marker-based AR is not new, neither is iPhone AR, but the stereoscopic, hands-free approach, where the rest of the world is effectively blinkered for you, adds an extra level of confusion for the brain. Normally, if you hold a phone up to a picture marker, the code will spot the marker, work out its orientation and relative position in the view, then render the 3D model on top. So if you, or the marker, moves, the model is moved too. When holding the iPhone up you can of course still see around it, rather like holding up a magnifying glass (magic lens, remember). When you POF, though, your only view of the actual world is the camera view of the phone. So when you see something added and you move your body around, it is there in your view. It is only the slight lag, and the fact the screen is clearly not the same resolution or lighting as the real world, that causes you not to believe it totally.
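For those who like code, here is a rough Unity-style C# sketch of the general marker-based pattern described above, not the MergeVR demo's actual code: whichever tracker you use reports the marker's pose each frame and the model is simply snapped to it (the callback name and wiring are assumptions).

```csharp
// Sketch: keep a 3D model glued to a tracked marker's pose.
using UnityEngine;

public class MarkerFollower : MonoBehaviour
{
    public Transform dinosaurModel;   // the 3D content to anchor on the marker

    // Assumed to be called by an AR toolkit whenever it (re)detects the marker
    // in the camera image, with the marker's pose relative to the camera.
    public void OnMarkerPose(Vector3 position, Quaternion rotation, bool visible)
    {
        dinosaurModel.gameObject.SetActive(visible);
        if (!visible) return;

        // Place the model at the marker's position and orientation, so it
        // appears fixed to the printed marker as you or the marker move.
        dinosaurModel.SetPositionAndRotation(position, rotation);
    }
}
```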
The recently previewed Microsoft HoloLens and the yet-to-be-seen Google-funded Magic Leap are a next step, removing the screen. They let you see the real world, albeit through some panes of glass, and then use projection tricks near to the eye, probably very similar to Pepper's ghost, to adjust what you see and how it is shaded, coloured etc., based on a deep sensing of the room and environment. It is markerless, room-aware blended reality, using the physical and the digital.

Back to the MergeVR. It also comes with a bluetooth controller for the phone, a small handheld device to let you talk to the phone. Obviously, with the touch screen covered in POF mode, you can't press any buttons 🙂 Many AR apps and examples, like the DinoAR demo, simply use your head movements and the sensors in the phone to determine what is going on. Other things, though, will need some form of user input. As the phone can see, it can see hands, but without a Leap Motion controller or a Kinect to sense the body, some simpler mechanism has to be employed.
However, this is where MergeVR gets much more exciting and useful for any of us techies and metaverse people. The labs are not just thinking about the POF container but the content too. A Unity3D package is being worked on. This provides camera prefabs (rather like the Oculus Rift one) that split the Unity3D view at runtime into a stereo camera pair of the right shape, size and perspective for the MergeVR view. It provides access to the bluetooth controller inputs too.
This means you can quickly build MergeVR 3D environments and deploy to the iPhone (or Android). Combine this with some of the AR toolkits and you can make lots of very interesting applications, or simply add 3D modes to existing ones you have. With the new Unity3D 4.6 user interface system it will be even easier to add heads-up displays.
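By way of illustration, here is a minimal C# sketch of the kind of side-by-side split such a camera prefab performs in Unity: two cameras, one per eye, each rendering to half of the phone screen. The real MergeVR package will differ in detail; this is just the principle.

```csharp
// Sketch: a simple side-by-side stereo rig, one camera per eye,
// each drawing into half of the screen via its viewport rect.
using UnityEngine;

public class SimpleStereoRig : MonoBehaviour
{
    public float eyeSeparation = 0.064f;   // approximate interpupillary distance in metres

    void Start()
    {
        CreateEye("LeftEye",  -eyeSeparation / 2f, new Rect(0f,   0f, 0.5f, 1f));
        CreateEye("RightEye",  eyeSeparation / 2f, new Rect(0.5f, 0f, 0.5f, 1f));
    }

    void CreateEye(string name, float offset, Rect viewport)
    {
        var eye = new GameObject(name).AddComponent<Camera>();
        eye.transform.SetParent(transform, false);
        eye.transform.localPosition = new Vector3(offset, 0f, 0f); // shift for each eye
        eye.rect = viewport;   // render to one half of the screen
    }
}
```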
So within about two minutes of starting Unity I had a 3D view up on the iPhone in the MergeVR using Unity Remote. The only problem I had was using the USB cable for quick Unity Remote debugging, as the left-hand access hole was a little too high. There is a side access on the right, but the camera needs to be facing that way. Of course, being nice soft material, I can just make my own hole in it for now. It is a prototype after all.
It’s very impressive, very accessible and very now (which is important to us early adopters).
Let's get blending!

(Note the phone is not in the headset as I needed it to take the selfie 🙂)

Use Case 2 – real world data integration – CKD

As I am looking at a series of boiled-down use cases for virtual world and gaming technology, I thought I should return to the exploration of body instrumentation and the potential for feedback in learning a martial art such as Choi Kwang Do.
I have of course written about this potential before, but I have built a few little extra things into the example using a new Windows machine with a decent amount of power (HP Envy 17″), the Kinect for Windows sensor, the Kinect SDK and the Unity3D package.
The package comes with a set of tools that let you generate a block man based on the joint positions. The controller piece of code has some options for turning on the user map and skeleton lines.
In this example I am also using Unity Pro, which allows me to position more than one camera and have each of those render to a texture on another surface.
You will see the main block man appear centrally “in world”. The three screens above him are showing a side view of the same block man, a rear view and, interestingly, a top-down view.
In the bottom right is the “me” with lines drawn on. The Kinect does the job of cutting out the background. All of this was recorded live running in Unity3D.
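For anyone wanting to try the same thing, here is a small, illustrative Unity C# sketch of the multi-view setup: an extra camera sits at a fixed offset from the block man (straight above it for the top-down view) and renders into a RenderTexture shown on an in-world screen. The names and values are my own, not the project's.

```csharp
// Sketch: one extra view of the Kinect block man, rendered to an in-world screen.
using UnityEngine;

public class ExtraViewCamera : MonoBehaviour
{
    public Transform blockMan;      // the Kinect-driven skeleton rig
    public Renderer screenQuad;     // the in-world surface that shows this view
    public Vector3 offset = new Vector3(0f, 5f, 0f);   // straight above = top-down view

    void Start()
    {
        var cam = gameObject.AddComponent<Camera>();
        var rt = new RenderTexture(512, 512, 16);
        cam.targetTexture = rt;                    // render into a texture, not the screen
        screenQuad.material.mainTexture = rt;      // display that texture on the quad
    }

    void LateUpdate()
    {
        // Keep the camera at a fixed offset, always looking at the block man.
        transform.position = blockMan.position + offset;
        transform.LookAt(blockMan);
    }
}
```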
The registration of the block man and the joints isn't quite accurate enough at the moment for precise Choi movements, but this is the old Kinect; the new Kinect 2.0 will no doubt be much, much better, as well as being able to register your heart rate.

The cut-out “me” is a useful feature, but you can only have that projected onto the flat camera surface; it is not something that can be looked at from the left or right. The block man, though, is made of actual 3D objects in space. The cubes are coloured so that you can see joint rotation.
I think I will reduce the size of the joints and try to draw objects between them to give him a similar definition to the cut-out “me”.
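A quick sketch of one way to do that in Unity C#: stretch a thin cube between each pair of connected joints so the block man gains limbs. The joint references would come from the Kinect skeleton; everything here is illustrative.

```csharp
// Sketch: a "bone" segment that spans the gap between two joint cubes.
using UnityEngine;

public class BoneSegment : MonoBehaviour
{
    public Transform jointA;   // e.g. the shoulder cube
    public Transform jointB;   // e.g. the elbow cube
    public float thickness = 0.05f;

    void LateUpdate()
    {
        Vector3 middle = (jointA.position + jointB.position) / 2f;
        Vector3 direction = jointB.position - jointA.position;

        // Sit the segment halfway between the joints, point it at joint B
        // and scale it along its local z axis to span the gap.
        transform.position = middle;
        transform.rotation = Quaternion.LookRotation(direction);
        transform.localScale = new Vector3(thickness, thickness, direction.magnitude);
    }
}
```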
The point here, though, is that game technology and virtual world technology are able to give a different perspective on a real-world interaction. Seeing techniques from above may prove useful, and is not something that can easily be observed in class. If that applies to Choi Kwang Do then it applies to all other forms of real-world data. Seeing from another angle, exploring and rendering in different ways can yield insights.
It is also data that can be captured and replayed, transmitted and experienced at a distance by others. Capture, translate, enhance and share. It is something to think about: what different perspectives could you gain on data you have access to?

A simple virtual world use case – learning by being there

With my metaverse evangelist hat on I have for many years, in presentations and conversations, tried to help people understand the value of using game-style technology in a virtual environment. The reasons have not changed, they have grown, but a basic use case is one of being able to experience something, to know where something is or how to get to it, before you actually have to. The following is not to show off any 3D modelling expertise; I am a programmer who can use most of the tool sets. I put this “place” together mainly to figure out Blender, to help the predlets build in things other than Minecraft. With a new Windows laptop complementing the MBP, I thought I would document this use case by example.
Part 1 – Verbal Directions
Imagine you have to find something, in this case a statue of a monkey’s head. It is in a nice apartment. The lounge area has a couple of sofas leading to a work of art in the next room. Take a right from there and a large number of columns lead to an ante room containing the artefact.
What I have done there is describe a path to something. It is a reasonable description, and it is quite a simple navigation task.
Now let's move from words, or a verbal description of placement, to a map view. This is the common one we have had for years: top down.
Part 2 – The Map
sniptop
A typical map; you will start from the bottom left. It is pretty obvious where to go: two rooms up, turn right, keep going and you are there. This augments the verbal description, or can work just on its own. Simple, and quite effective, but it filters a lot of the world out in simplification, mainly because maps are easy to draw. It requires a cognitive leap to translate to the actual place.
Part 3 – Photos
You may have often seen pictures of places to give you a feel for them. They work too. People can relate to the visuals, but it is a case of you get what you are given.
snip1
The entrance
snip2
The lounge
snip3
The columned corridor
sni4
The goal.
Again in a short example this allows us to get quite a lot of place information into the description. “A picture paints a thousand words”. It is still passive.
A video of a walkthrough would of course be an extra step here; that is more pictures, one after the other. Again, though, it is directed. You have no choice in how to learn, how to take in the place.
Part 4 – The virtual
Models can very easily now be put into tools like Unity3D and published to the web to be walked around. If you click here, you should get a Unity3D page and, after a quick download (assuming you have the plugin 😉 if not, get it!), you will be placed at the entrance to the model, which is really a 3D sketch, not a full-on, high-end, photo-realistic rendering. You may need to click to give it focus before walking around. It is not a shared networked place, it is not really a metaverse, but it has become easier than ever to network such models and places if sharing is an important part of the use case (such as in the hospital incident simulator I have been working on).
The mouse will look around, and ‘w’ will walk you the way you are facing (‘s’ is backwards, ‘a’ and ‘d’ side to side). Take a stroll in and out, down to the monkey and back.
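For completeness, here is a minimal Unity C# sketch of that control scheme, assuming the main camera is a child of the player object; it is not the exact script used in this demo, just the idea.

```csharp
// Sketch: mouse to look, WASD to move, with a CharacterController so you
// bump into walls and columns rather than walking through them.
using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class SimpleWalker : MonoBehaviour
{
    public float moveSpeed = 3f;
    public float lookSpeed = 2f;
    float pitch;

    void Update()
    {
        // Mouse look: yaw the body, pitch the (child) camera.
        transform.Rotate(0f, Input.GetAxis("Mouse X") * lookSpeed, 0f);
        pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * lookSpeed, -80f, 80f);
        Camera.main.transform.localRotation = Quaternion.Euler(pitch, 0f, 0f);

        // WASD: w/s map to the vertical axis, a/d to the horizontal axis.
        Vector3 move = transform.forward * Input.GetAxis("Vertical")
                     + transform.right   * Input.GetAxis("Horizontal");
        GetComponent<CharacterController>().SimpleMove(move * moveSpeed);
    }
}
```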
I suggest that now you have a much better sense of the place: the size, the space, the odd lighting. The columns are close together, so you may have bumped into a few things. You may linger on the work of art. All of these tiny differences are putting this place into your memory. Of course finding this monkey is not the most important task you will have today, but apply the principle to anything you have to remember, conceptual or physical. Choosing your own way through such a model or concept is simple, but much more effective, isn't it? You will remember it longer and maybe discover something else on the way. It is not directed by anyone; your speed, your choice. This allows self-reflection in the learning process, which reinforces understanding of the place.
Now imagine this model made properly, with nice textures and lighting, a photo-realistic place, and pop on a VR headset like the Oculus Rift, which in this case is very simple with Unity3D. Your sense of being there is even further enhanced, and it only takes a few minutes.
It is an obvious technology isn’t it? A virtual place to rehearse and explore.
Of course you may have spotted that this virtual place, whilst built in Unity3D to walk around, also provided the output for the map and for the photo navigation. Once you have a virtual place you can still do things the old way if that works for you. It's a virtual virtuous circle!

Dear BBC I am a programmer and a presenter let me help

I was very pleased to see that Tony Hall, the new DG of the BBC, wants to get the nation coding. He plans to “bring coding into every home, business and school in the UK”. http://www.bbc.co.uk/news/technology-24446046
So I thought, as I am lacking a full time agent in the TV world, I should throw my virtual hat in the ring to offer to work on the new programme that the BBC has planned for 2015.
It is not the first time I have offered assistance to such an endeavour, but this is the most public affirmation of it happening.
So why me? Well, I am a programmer and have been since the early days of the shows on TV back in the ZX81/C64/BBC Model A/B/Spectrum days. I was initially self-taught through listings in magazines and general tinkering, before studying to degree level and then pursuing what has been a very varied career, generally involving new tech at each step of the way.
I was lucky enough to get a TV break with Archie Productions and the ITV/CITV show The Cool Stuff Collective, well documented on this blog 😉 In that I had an emerging technology strand of my own. The producers and I worked together to craft the slot, but most of it was driven by things that I spend my time sharing with C-level executives and at conferences about the changing world and maker culture.
It was interesting getting the open source Arduino, along with some code, on screen in just a few short minutes. It became obvious there was a lot more that could be done to help people learn to code. Of course these days we have many more ways to interact too. We do not have to just stick to watching what is on screen; that can act as a hub for the experience. Code, graphics, artwork, running programs etc. can all be shared across the web and social media. User participation, live and in sync with on-demand, can be very influential. Collections of ever-improving assets can be made available, then examples of how to combine them put on TV.
We can do so much with open source virtual worlds, powerful accessible tools like Unity3D and of course platforms like the Raspberry Pi. We can also explore the creativity and technical challenges of user-generated content in games, and next-gen equipment like the Oculus Rift. Extensions to the physical world with 3D printers, augmented reality and increasingly blended reality offer scope for innovation and invention by the next generation of technical experts and artists. Coding and programming is just the start.
I would love to help; it is such an important and worthy cause for public engagement.
Here is a showreel of some of the work.

There is more here and some writing and conference lists here
So if you happen to read this and need some help on the show get in touch. If you are an agent and want to help get this sort of thing going then get in touch. If you know someone who knows someone then pass this on.
This blog is my CV, though I do have a traditional few pages if anyone needs it.
Feeding Edge, taking a bite out of technology so you don’t have to.
Yours Ian Hughes/epredator

CGI people and horses, People as CGI

Yes, I know some of us of a certain web age will still think CGI is “Common Gateway Interface”, and it gets us thinking in Perl and about $POST and $GET, but CGI is now commonly known as Computer Generated Imagery. Things have certainly changed over the last few years with respect to what can actually be generated and just how good it has got.
Last night I saw the new Galaxy chocolate ad. Normally if there is some CGI my brain goes into “how did they do that” mode, travelling around the uncanny valley. However, this ad I watched and thought, wow, that actress looks just like Audrey Hepburn. A quick google revealed that it was Audrey Hepburn, the CGI version. Now, watching it again, I can see that it is CGI, though not all the time.

The other ad doing the rounds that is obviously CGI, unless horses really can moonwalk, is 3's dancing pony. Strangely it is the real-life bits that jar with me on this. The close-in on what are model hooves is jarring compared to the smoothness of the rest of it.

It is nice to see 3 doing a pony dance mixer as a YouTube application. It is a lot more Monty Python than slick CGI, but it's worth a few moments to have a go 🙂 Mine is here.
What I saw today, though, whilst researching more about holodecks, was this great twist on the sort of building projection projects that animate entire cityscapes. Here the projection canvas is a person. This is really brilliant I think. Lots of the images certainly stick with you after watching it. It works by scanning the person (who then sits very still); the scan informs the projector how to deliver and map the textures onto the surface, the same way designers use a 3D application and create a UV map of the textures and even baked lighting.

This is more blended reality at work 🙂 It is similar to how the Kinect performs some visual trickery on screen (not yet via projection, but maybe soon) in things like Kinect Party.