

Blink and you miss the future of sports broadcasting.

At the last Formula 1 race of the season (well done Jenson Button for becoming world champion the race before), not only was the coverage by the BBC brilliant, if nothing else for the fact that we had no adverts destroying the action and we had the return of “The Chain” as a theme tune, but there was also a little hint at the future of sports broadcasting.
This example may have come from the F1 company feed, or been part of the upgrade in racing brought by the Abu Dhabi circuit, but it was the sort of thing we need to see more of.
The live video replay was blended into a rendered 3D model so that new camera angles and analysis could be generated after the event.

This is of course a little after the event, and probably done manually in the TV suite, but as we showed with Wimbledon and Second Life way back, it is possible to rebuild live elements from data and allow viewers their own perspective on the action.
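To make the "rebuild it from data" idea concrete, here is a minimal Python sketch of the sort of thing I mean (my own illustration, not how the BBC or the F1 feed actually do it), assuming we had a feed of timestamped car positions to drop into a local 3D scene:

```python
import math
from dataclasses import dataclass

@dataclass
class CarSample:
    car_id: str
    t: float    # seconds since the start of the session
    lat: float  # degrees
    lon: float  # degrees

def to_local_metres(sample, origin_lat, origin_lon):
    """Project a GPS sample onto a flat plane around a chosen circuit origin."""
    r_earth = 6_371_000.0
    x = math.radians(sample.lon - origin_lon) * r_earth * math.cos(math.radians(origin_lat))
    y = math.radians(sample.lat - origin_lat) * r_earth
    return (x, y, 0.0)  # z could come from a track elevation model

def scene_at(time_s, feed, origin_lat, origin_lon):
    """Latest known 3D position of each car at a given replay time."""
    positions = {}
    for sample in sorted(feed, key=lambda s: s.t):
        if sample.t <= time_s:
            positions[sample.car_id] = to_local_metres(sample, origin_lat, origin_lon)
    return positions  # hand these to any renderer and point a camera wherever you like
```

Once every car is just a position in a scene, the camera angle is whatever the viewer wants it to be.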
I hope the Olympics 2012 people were watching!

Thank you to the audience at Smarter Technology

On Tuesday night I was invited to pop along in Second Life to a Smarter Technology session (sponsored, ironically, by IBM). Jon and Rissa kindly asked me to show and tell on the various elements of presenting in Second Life that I stumbled upon when preparing for the conference in Derry earlier in the year.
I basically had a pop at all the single-screen PowerPoints that we end up doing in-world, just replicating what we do in offices. The space and dynamic nature of virtual worlds means we can do so much more. Part of what I do is wear the presentation elements; this is of course a hack around not being able to rez, but it works and has some nice side effects.
I used most of the flavours of the pitch that I wrote about here, and yes we did get onto 3D printing. So whilst the presentation was about presentation styles, I also tried to inject some of the content as well so it was not overly meta.
I got Jon to set up my first page on the big screen, then also to break the build with a long, wide collection of slides similar to the ones that can be seen on IQ, which is a presentation trick people use to show the entire pitch and move along it (and which is well on the way to breaking the PPT metaphor).
Getting ready for the show
I had also built some examples around the quest for 2D whiteboarding (which we still need), showing how that can be done in a different way in these environments. We have dynamic creation and 3D immersion, so why stick to 2D?
chalk and talk
chalk and talk 2.0
(These shots were from when I popped along to check the space out so I did not grief people too much)
Still, I think the favourite was the giant hands.
Ian Hughes, CEO of Feeding Edge, Ltd @ Smarter Technology
Photo by Ishkahbibel
So a huge thank you to all who tuned in or attended. The discussions and questions were great. It was mixed mode, in that text flew past and I answered mostly by voice, something that still takes some getting used to. However, the sheer amount of text activity meant I knew everyone was there 🙂
I know right at the end we descended into Mac/iPhone/Xbox/Windows 7 discussions, but it's kind of like the Monty Python parrot sketch: it's part of a gathering to discuss such things.
If anybody missed an answer to a question they asked, or I missed the question altogether in the live flow, then please feel free to ping me in-world, comment here, or find me on Twitter or….. I am not hard to find 🙂
IQ and Hursley are still there in-world. There are a few plots left to rent on Hursley, and the half sim on IQ is also available, though it has a furry colony borrowing it at the moment. Let me know if you need some SL space.

Sculpty fingers – Crossing worlds – Moving data

I just bumped into an excellent iPhone application called Sculptmaster 3d by Volutopia. It lets you build and manipulate a 3D model using touch and some simple tools. You can then take that model and export it as an OBJ file via email, or just snapshot it.
It produces very organic results; it feels like modelling with clay in many respects. The key for me is that it is using a different interface, in this case the iPhone. This makes it very accessible and available to people who want to just have a quick go.
IPhone sculpture
Given what I was up to yesterday I had to try exporting it and uploading it to 3dvia, which of course works fine.
Once it is there in 3dvia it is shareable via the 3dvia mobile app; just search for “handmade by touch” as the model name.
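Because the export is a plain text OBJ file, it is easy to poke at the mesh before uploading it anywhere. A quick Python sketch (the file name is just an example) that counts vertices and faces and sizes the bounding box:

```python
# OBJ is plain text: "v x y z" lines are vertices, "f ..." lines are faces.
def obj_summary(path):
    verts, faces = [], 0
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                verts.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":
                faces += 1
    xs, ys, zs = zip(*verts)
    bbox = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return {"vertices": len(verts), "faces": faces, "bounding_box": bbox}

print(obj_summary("handmade_by_touch.obj"))
```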
Again I know this is not live augmented reality but I added the sculpture to my garden.
In situ
That geotagged photo is then actually appearing in the live Flickr layer in Layar, so it's another wheel within a wheel.
This is interoperability. Data, and the presence of that data, is able to flow. People allowing import and export of user generated content, either live or in more old-fashioned batch form, means that we can combine applications and try ideas out instantly.
This sculpture could of course now be manufactured on a 3D printer; the data pattern could be sold, licensed, adapted, or dropped into other virtual environments. The loop is complete.

Augmenting Augmented Reality

Whilst having a go with the 3dvia iPhone application I decided to see if I could use my Evolver model, upload it to 3dVia, and get myself composited into the real world. The answer is of course you can; it's just a 3ds model! However, I then wondered what would start to happen if we started to augment augmented reality, even in the light sense. I had done some of this with the ARTag markers way back on eightbar, and also here, putting the markers into the virtual world and then viewing the virtual world through the magic window of AR.

That of course was a lot of messing about. This, however, was simple, handheld and took only a few seconds to augment the augmentation.
AR me
Or augment my view of Second Life.

So these are the sorts of live 3D models we are likely to be able to move in real time with the 3D elements in Layar very soon.
Likewise we should be able to use our existing virtual worlds, as much as the physical world, to define points of interest.
I would like to see (and I am working on this a bit) an AR iPhone app that shows proxies of OpenSim objects in their relative positions, letting me tag up a place or a mirror world building with virtual objects. Likewise I want to be able to move the AR objects from my physical view and have their positions update in OpenSim.
All the bits are there, just nearly linked up 🙂
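The missing glue is really just a mapping between region coordinates and the physical world. A minimal sketch, assuming you pick an anchor point for the region yourself (it is not something OpenSim gives you):

```python
# Map a position inside an OpenSim region (metres, x east / y north) to a
# real-world lat/lon for an AR point of interest, and back again so that
# moving the AR proxy can update the in-world object.
import math

R_EARTH = 6_371_000.0

def region_to_geo(x_m, y_m, anchor_lat, anchor_lon):
    """Region-local metres -> lat/lon in degrees, flat-earth approximation."""
    lat = anchor_lat + math.degrees(y_m / R_EARTH)
    lon = anchor_lon + math.degrees(x_m / (R_EARTH * math.cos(math.radians(anchor_lat))))
    return lat, lon

def geo_to_region(lat, lon, anchor_lat, anchor_lon):
    """lat/lon -> region-local metres, the inverse of region_to_geo."""
    y_m = math.radians(lat - anchor_lat) * R_EARTH
    x_m = math.radians(lon - anchor_lon) * R_EARTH * math.cos(math.radians(anchor_lat))
    return x_m, y_m
```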
We know it works taking real world input and injecting it into a virtual world (that was Wimbledon et al.), or using marker-based approaches such as the Jedi mind numbers I did over a year ago.

I am thinking that between things like the Second Life media APIs or direct OpenSim elements knowing what is going on in the world, combined with sharing services for models like 3Dvia and wizards like Evolver, all jammed into smartphones like the iPhone, we have a great opportunity.

Layar reaches the iPhone – AR for everyone?

You can tell I have a 3GS in my hands now, hence a few more hands-on tests of apps. I had been waiting until the Augmented Reality service linkages were unleashed, and today it got even better with Layar, the AR browser, appearing for download. It works really well, with simple Google searches showing up in your heads-up display as you pan your iPhone (or Android phone) around.
The key is of course the layers: different providers offering different sets of data, from Twitter to Flickr and even Wikipedia.
It is so simple and stunningly effective.
Even better though (and this saves me a job having to do it) is that they are going from static pins and icons to animated 3D objects placed in AR space in November.
Now surely this is where we get to define the objects in relative space and store them in OpenSim as the source for the layer. I am thinking in particular of things like mirror world builds being able to be mapped easily to physical space. Tagging real world places so people can visit them in the mirror world and vice versa.
There are also some slightly more obscure possibilities; think of some very good blended real and virtual world applications (the list of which in my head deserves its own post very soon).
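To make the "OpenSim as the source for a layer" idea concrete, what it boils down to is a tiny web service that answers "what is near this lat/lon?" with a list of points of interest. A rough Python sketch follows; the JSON field names are illustrative rather than the actual Layar developer API, and the object store is a stand-in for a real OpenSim query:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for objects pulled out of an OpenSim region, already geo-tagged.
OBJECTS = [
    {"name": "virtual plane", "lat": 50.7275, "lon": -1.5430,
     "model_url": "http://example.com/plane.obj"},
]

class PoiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        lat, lon = float(qs["lat"][0]), float(qs["lon"][0])
        radius_deg = 0.01  # crude "nearby" filter for the sketch
        hotspots = [o for o in OBJECTS
                    if abs(o["lat"] - lat) < radius_deg and abs(o["lon"] - lon) < radius_deg]
        body = json.dumps({"hotspots": hotspots}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), PoiHandler).serve_forever()
```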
plane
Easy, free, and relatively accessible with a popular device set. I think Layar has just set themselves up as the AR equivalent of Second Life in virtual worlds, or Google in search.

Augmented Reality Magic

I just bumped into this magic video using the sort of Augmented Reality things that we are seeing emerge for real on devices like the iPhone. The award winning magician Marco Tempest is making some of the close-up magic we see on TV start to incorporate AR, but not in a way that is just cheating with a camera trick. Obviously at the moment some of the elements are clever live editing into the performance, but it is the more descriptive and engaging visualizations that help tell the story of the pack of cards that prove the most interesting.

The underlying techniques and tricks may well be the same ones used by many magicians, but the techno showmanship really appeals, whilst also being actually possible right now.

Explorers, Anchors and Historians – Who is the troublemaker?

Whilst out cycling, and dealing with a rather annoying puncture on the cliff top overlooking the Solent, I was struck by a concept that I needed to think about some more and then share. The Solent has often been the gateway to some great exploration, it has some significant history around Portsmouth and Southampton, and it is also a safe port for many a vessel.
HMS Victory
This trinity of classification seems to apply to the adoption of new ideas, new ways of working and new products.
The Explorers go out and discover, they invent, they sing the praises of the new world. They risk a great deal, but the risk is the reward for many.
The Anchors offer a safe haven, keeping things in place, giving an explorer somewhere to launch from and return (triumphantly) to.
The Historians remember, capture, keep and preserve both the good and the bad. They keep old ways of doing things alive.
We need all three, but sometimes the system goes out of balance. Any one of the set can cause that, but the most frequent case in my experience is the following.
The Explorers head out, discover and return. The Anchors worry about having to haul anchor, and find ways and means of not accepting what the Explorer has found. The Historians back up the Anchors with “this is the way we have always done it” or “it did not work last time so…”.
Generally the Explorer will battle with the Anchors to justify the expedition, but a strong Anchor just has no reason to move.
In reality the Explorer should target the Historians, showing them there are other ways of working to preserve and new exhibits to collect, and generally forget the Anchors altogether.
Why does this make sense? Well, if the Anchors are in control the system stops completely. The Explorers won't go out and find anything, the Anchors won't move on, and so the Historians have nothing to preserve as old. Everything will remain static and the same, just as the Anchor needs it.
If the Historians demand history to record, and the Explorers find new ports to anchor in and provide that change, then the system works.
So, and excuse the pun, don’t let the Anchors drag us down.
It is not the free and fancy Explorer that is the troublemaker here, nor the Historian seeking to preserve the old ways; it is the Anchor.
I speak as an Explorer and sometime Historian; which of the three are you?

Are you pointing that thing at me?

The App Store is now starting to fill up with AR applications. The next is Robot Vision – thanks Tish for tweeting it 🙂

So there we go: open up the things already on the device to the development community and we have leapt on from the really subtle launch of what we used to call location based services, those simpler “find an x near me” apps. Almost overnight in tech terms we now have magic window and magic mirror Augmented Reality apps on a highly used, highly consumer orientated device.
There is not really even time to use the words tipping point, is there!

Bionic Eye – Augmented Reality on the iPhone (officially)

@kohdspace just tweeted this piece of news from Mobile Crunch.
After what amounted to some terms-of-service-violating hacks, where people built AR applications for the iPhone, it seems that this Bionic Eye is an official App Store approved version.
I don’t have an iPhone (yet) despite being a registered iPhone developer, but if the APIs are released and working I expect a huge flood of AR applications, and I suspect there are some very clever ideas yet to be implemented.
I want to be able to create my relationship between the data and the real world using a mirror world or 3D control deck somewhere like Second Life, and then be able to effectively push new AR overlays and contexts out to people.
I think this will feel a little like the renamed Operation Turtle, now called Hark, but for AR. In fact Hark is a handy name, isn't it, as it already has the letters AR in it. There I go, thinking out loud and inventing things.
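Pushing an overlay out like that could be as simple as the control deck notifying anyone subscribed whenever an object moves. A back-of-envelope Python sketch, with made-up URLs and payload shape purely for illustration:

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoints for viewers who have subscribed to my overlay.
SUBSCRIBERS = ["http://example.com/viewer1/overlay", "http://example.com/viewer2/overlay"]

def push_overlay(name, lat, lon, model_url):
    """Tell each subscribed viewer where an AR object should now sit."""
    payload = json.dumps({"name": name, "lat": lat, "lon": lon, "model": model_url}).encode()
    for url in SUBSCRIBERS:
        req = Request(url, data=payload, headers={"Content-Type": "application/json"})
        urlopen(req)  # in practice you would handle failures and retries

push_overlay("hursley house marker", 51.025, -1.397, "http://example.com/hursley.obj")
```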
I would love to hear if anyone has the Bionic Eye app already, or if anyone is working on a mashup with the virtual worlds we already have, rather than just GPS overlays.