

Scaling – It’s all relative

Holidays give you time to ponder and inject serendipitous thoughts and events into your life. For me this time it was seeing an episode of Coast telling part of the story of Albert Einstein in Belgium. It got me thinking about relativity and the points of view that people approach things with. In particular this related to just how many people and things need to be in a virtual space at any one time.
Cloning me in OpenSim
If you arrive at metaverse interaction from a “business” perspective, or are used to Second Life and/or OpenSim, then having more than 50 avatars in one region or space at a time is viewed as an unfortunate limitation. The point of view being used is that of needing a mass communication vehicle, running huge events for thousands of people who will all listen to your message. That however is a flat web mentality: a billboard, even if one with interaction and a sense of belonging.
If you arrive at metaverse interaction from a “games” perspective, having 32 people in one space performing a task is pretty much the maximum (in general). Having more doesn’t always add to the experience. OK, so Eve Online has 60,000 users on one server, but you don’t really get to see them, and it is the exception. A few games have dabbled with FPS scenes with 128 people battling, but your battle is with the few dug in around you, seldom the people on the other side of the map.
So we have two relative viewpoints. It is odd that the “business” perspective, where it is about meetings and communication, sharing ideas and understanding, will use the objection of “oh, only 50 people?” when in fact the gatherings it needs are small groups collaborating on fast ideas. The games industry direction is one of “what’s the point of having all that freedom and that many people?” when that industry is about mass take-up and use of produced resources.
If we had the ultimate virtual environment that could house an infinite number of objects and avatars at amazing resolutions, would we be any better off in working out how to communicate or play with one another?
Is it the restrictions of the mediums themselves that bring the greatest creativity?
If everyone could log on and all be in one place, surely we would still have sparse areas where no one was, or overcrowded areas where it seemed so busy you would use your client to not bother rendering quite so many people.
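That client-side trick of not rendering everyone is worth spelling out. A minimal sketch of the idea, with invented names (no real viewer works exactly this way): rank the avatars in a crowded region by distance from your camera and draw only the nearest few.

```python
# Toy sketch of client-side avatar culling: with hundreds of avatars in
# range, only the nearest few are worth rendering in full detail.
# All names here are illustrative, not from any real client.

def pick_avatars_to_render(camera_pos, avatars, max_rendered=50):
    """Return the avatars closest to the camera, up to max_rendered."""
    def dist_sq(pos):
        return sum((a - b) ** 2 for a, b in zip(camera_pos, pos))
    ranked = sorted(avatars, key=lambda av: dist_sq(av["pos"]))
    return ranked[:max_rendered]

# A pretend crowd of 200 avatars strung out along one axis.
crowd = [{"name": f"av{i}", "pos": (float(i), 0.0, 0.0)} for i in range(200)]
visible = pick_avatars_to_render((0.0, 0.0, 0.0), crowd, max_rendered=50)
print(len(visible))           # 50
print(visible[0]["name"])     # av0 (the nearest)
```

A real viewer would also drop level of detail with distance rather than cutting off sharply, but the principle is the same: the crowd exists, you just stop paying to draw all of it.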
I am not saying we should not strive for more and bigger spaces; I would love to see them. I am also not saying let’s stick with 32 soldiers in a war-torn town. I am saying that everyone should consider the other perspectives on what they are trying to do and why, and not discount the other view. The chances are the other view may be what you actually want to do.
I was also struck by scale and relative size whilst wandering around the amazing sand sculptures in Portugal. It was like a texture-free virtual world.

Each of these is crafted from millions of particles of sand. It is not just a surface rendering; the whole thing is sand right through. No computer-style rendering shortcuts. So each particle is a physical pixel, a phixel? It takes up space and can only be viewed by the number of people who can fit in the space. The constraints of the medium again make it interesting. There is a choice to sculpt in lots of materials, yet this huge gathering chose only to use sand, because of its properties and impact, and the craft and talent required to use it.
This all blended in mentally with Minecraft.

The predlets got interested seeing me start to play and actually understand the point, and pointlessness, of Minecraft.
I was only really doing a few bits and pieces, but once you start digging and rebuilding, combining and digging some more, it is (as its success has shown) a totally intriguing experience. I think it works because of its Lego-like qualities. You are restricted in the medium and in the resources, but the challenges you set yourself become very real despite the old-school Doom-like block graphics.
So once again it is all relative, it is not everything to everyone with one perfect ultra solution.
In the meantime it is good news that Intel is pushing forward with 1,000 avatars in a sim (i.e. more than most high end games would dream of going near), as reported here http://www.hypergridbusiness.com/2011/06/1000-avatars-soon-coming-to-a-region-near-you/ and there are some interesting papers on how OpenSim scales and works from Intel here

Real to Real world Augmented Reality

As you may have noticed I think we can do more with Augmented Reality than simple data layers on top of the real world. The principles and technology of layering anything from anywhere make some things possible that can really help us.
An example of this concept is considering the problem of driving along the motorway behind a massive great lorry or van. You keep your distance, yet you have no real awareness of what’s happening just in front.

We have the augmented reality of a rear view mirror to give us situational awareness of what is behind.
So what would happen if, whilst in your car, you could have an augmented reality forward view, or HUD, that showed you what was in front of the obstruction using its view of the world? This is an augmented reality application, but not a traditional one. It is a magic lens, but it is real world to real world.
It is not without problems of course: how big, how much screen? But it can be based on peer-to-peer networking between the vehicles. It also has the benefit that when in a traffic jam there is no reason not to zoom your view forward, hopping from car to car until you get to the problem in the road.
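The hop-forward idea can be sketched very simply. Assuming each vehicle holds a peer link to the one directly in front and is willing to relay its forward camera view (the class and field names below are invented for illustration), zooming forward is just walking that chain of peers:

```python
# Hypothetical sketch of the "hop your view forward" idea: each vehicle
# shares its forward camera feed peer-to-peer with the car behind it.
# Vehicle, .view and .ahead are invented names for illustration only.

class Vehicle:
    def __init__(self, name, view, ahead=None):
        self.name = name      # vehicle id
        self.view = view      # what its forward camera currently sees
        self.ahead = ahead    # peer link to the vehicle in front, if any

def zoom_forward(start):
    """Hop from car to car until the front of the queue, returning
    the chain of forward views gathered along the way."""
    views, car = [], start
    while car is not None:
        views.append((car.name, car.view))
        car = car.ahead
    return views

front = Vehicle("lorry", "overturned caravan ahead")
middle = Vehicle("van", "rear of lorry", ahead=front)
me = Vehicle("my car", "rear of van", ahead=middle)

for name, view in zoom_forward(me):
    print(f"{name}: {view}")
```

A real system would have to handle cars joining and leaving the chain, trust between peers, and video bandwidth, but the traversal itself is this simple.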

Claiming back 3D – b.Tween3D

@andypiper pinged me to ask if I was aware of this upcoming event. I was not, but I know that if I had been paying attention I would have been straight on the case for this.
http://btween3d.co.uk/ is an upcoming event, the content of which is being formed right now, that is aiming to reclaim 3D and push forward the sort of experiences we can grow from various combinations of ideas and technologies. Not just sitting with specs on in the cinema.
It is something that I think we need to do, which is why I often finish my talks with the picture of an ecosystem of content and inputs/outputs and transformations.

Virtual worlds and data augmenting physical worlds, augmenting other virtual worlds, and also 3d-printed goods from environments affecting the interaction in a physical environment, which in turn alters a virtual or augmented environment. It’s a loop. It challenges where the boundaries of “3d” or immersion are. It stops people thinking that everything is in one place or done one way. It also points the way to a vast set of new uses and interactions and businesses to grow.
As an example, take Alice Taylor / @wonderlandblog and her startup Makieworld. Here Alice is creating a business to allow people to build their toys and 3d print them out. Now what happens when that toy is then part of another digital capture experience, e.g. being held whilst an Xbox Kinect mocaps the player? The previously digital-to-physical has become digital again.
It is a fascinating loop, with no particular end point or start point. It has had me pondering for some time, and with each pitch I give it becomes more obvious to me that this is where we are heading. It’s not just AR layers, not just games, not just virtual worlds for meetings, not just social media, not just location based services, not just 3d visualisation, not just printing a 3d model. It is all of those, with stories and meaning intertwined where we need it as humans to get benefit and pleasure.

3ds StreetPass and pedometers – new social gaming twist

A feature of the new 3ds handheld that does get mentioned, but is generally overshadowed by the lenticular-lens 3d screen, is its network capabilities. StreetPass may seem an odd concept, but in terms of ensuring you have the console on your person at all times, and powered up, it is fascinating.
What StreetPass does is keep the 3ds wireless running whilst in standby, but in a low power mode. It looks for other 3ds consoles within about 30 metres of it at any time. Why? Well, it does this so that if a fellow 3ds owner passes you in the street, the two machines will exchange data with one another whilst sat in your bag or pocket. It is a form of digital business card exchange.
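The mechanic boils down to a mutual exchange triggered by proximity. Here is a toy simulation of that idea; the classes and the exact range check are made up for illustration and are in no way Nintendo’s actual protocol:

```python
import math

# A toy simulation of the StreetPass idea: when two consoles come within
# roughly 30 metres, each stores the other's greeting card. Console and
# try_streetpass are invented names; the ~30 m range is the figure
# mentioned above, not an official specification.

STREETPASS_RANGE_M = 30.0

class Console:
    def __init__(self, owner, greeting, pos):
        self.owner = owner
        self.greeting = greeting
        self.pos = pos          # (x, y) position in metres
        self.received = []      # cards collected from passers-by

def try_streetpass(a, b):
    """Exchange greeting cards if the two consoles are in range."""
    if math.dist(a.pos, b.pos) <= STREETPASS_RANGE_M:
        a.received.append((b.owner, b.greeting))
        b.received.append((a.owner, a.greeting))
        return True
    return False

mine = Console("epredator", "Feeding Edge Ltd", (0.0, 0.0))
theirs = Console("stranger", "Hola from Spain", (10.0, 20.0))  # ~22 m away

print(try_streetpass(mine, theirs))   # True - within range
print(mine.received[0][0])            # stranger
```

Note the exchange is symmetric: both consoles end up holding the other’s card, which is exactly why my Mii turned up on a stranger’s handheld as well.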
Nintendo have, with this, invented a whole new dynamic to social gaming in a networked world. You often bump into gamers online in games that you play, at the times that you play them, i.e. fully distributed networks. With StreetPass you are bumping into gamers at a time and physical place, which gives you something in common, and hence makes that person of interest.
What interested me, aside from the tech, was how this felt. Whilst I was in London heading back from the BBC I had my 3ds on standby in my pocket on the tube. I actually got off at a stop I did not need to, in this case Piccadilly Circus, and walked down to Leicester Square in order to see if my StreetPass would connect with anyone.
This physical activity did in fact yield a few StreetPass exchanges. So when I got back on the tube train I was able to see a few Miis (the little avatars you create as a calling card/game character) now residing on my handheld. One was even from someone from Spain. My epredator Mii had also been delivered to them of course. I had, as an experiment, also put Feeding Edge Ltd as my greeting, so it was a kind of advert!
Mii visitors
These Mii’s also come with things that relate to games, such as a specific piece of a puzzle, or they can be used to drive a mini game. We are only scratching the surface of how this will pan out.
I had been playing a bit of Street Fighter IV, which is a tremendous game on the 3ds and has been crafted superbly. Part of that is the ability to win in-game figures. These figures form part of a collection that can be traded wirelessly, but more excitingly you are able to assemble a fantasy fighting team from the figures. You are given a certain number of skill and attribute points that you are allowed to spend, and the different figures and characters can be used to create the team. Once created, this can be StreetPass enabled. I obviously wandered past a fellow SF IV player, as we basically exchanged team data. Once you spark up the game again it asks if you want to run the simulated battle using this data. I did, I won 🙂 and in winning I gained some more currency to spend on more in-game figures to build the team. It’s almost like play-by-mail rock/paper/scissors with a huge techie twist.
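The team-building part is a classic budget problem: figures cost points, and you can only field what fits under the cap. A toy version of that mechanic, with figures and costs entirely invented (only the “spend points to assemble a team” idea comes from the game):

```python
# A toy version of the figure-collection team builder: you have a points
# budget and each figure costs some of it. The figures, costs and
# strengths below are invented; only the mechanic is from the game.

def build_team(figures, budget):
    """Greedily pick the strongest figures that still fit the budget.
    Each figure is a (name, cost, strength) tuple."""
    team, spent = [], 0
    for name, cost, strength in sorted(figures, key=lambda f: -f[2]):
        if spent + cost <= budget:
            team.append(name)
            spent += cost
    return team, spent

figures = [("Ryu", 40, 9), ("Blanka", 30, 7), ("Dan", 10, 2), ("Zangief", 35, 8)]
team, spent = build_team(figures, budget=80)
print(team, spent)   # ['Ryu', 'Zangief'] 75
```

Greedy-by-strength is of course not always optimal (it is a small knapsack problem), which is part of what makes fiddling with the team fun.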
3ds SF IV team
Just to complete the behaviour-altering loop, the 3ds also acts as a pedometer. It counts your steps; the more steps you do, the more game coins you are credited with. These game coins can be used to purchase in-game items. In the case of SF IV it is, yes you guessed it, more game figures to build the team with.
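The step-to-coin conversion sketched below matches what I understand the 3ds to do (roughly one coin per 100 steps, capped per day), but treat the exact numbers as assumptions rather than Nintendo’s specification:

```python
# Assumed conversion: ~1 game coin per 100 steps, capped at 10 per day.
# These constants are my understanding, not an official spec.

STEPS_PER_COIN = 100
DAILY_COIN_CAP = 10

def coins_earned(steps_today):
    """Game coins credited for a day's walking."""
    return min(steps_today // STEPS_PER_COIN, DAILY_COIN_CAP)

print(coins_earned(250))    # 2
print(coins_earned(5000))   # 10 - the daily cap kicks in
```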
So, I was carrying a 3ds in my pocket, deliberately walked through the centre of London to see how it worked, gathered some other gamers’ data, played an offline game with that data which earned me more game coins to be able to do that again, and the act of walking was also earning those game coins too.
I think this is going to be a fascinating twist, and may surpass the 3d elements of the Nintendo handheld. It has shades of the Wiimote for being genre-changing. In a world of online social games they have reintroduced the physical environment to the mix, and I am sure some very cool ideas will spring out of that.
(I have not even begun to riff on the AR capabilities which are awesome too, more on that later)

Selling your friends down the digital river

There seems to be a little bubble of interest growing around the social media trading site Empire Avenue. I only really started to take notice of it last night, and created a profile and ticker for epredator.

It even encourages you to increase your wealth by asking people to share links like this to sign up.
I was intrigued, as it reminded me of a site I was introduced to way back in 1998 by some American colleagues. I was wracking my brains trying to remember it, and luckily it’s still there: HSX.com. My id has long since expired on it, it would appear. The aim was to get info about films and stars and to buy into them with pretend shares. At the time it was a very spreadsheet-like site, but it told you about films that were in pre-production and you could place buy orders on them. I do remember I made a fortune on a relatively unknown Helena Bonham-Carter as her career broke.
Empire Avenue alters this formula and makes us all the stars, and our activity on social media, as we link our accounts to it, becomes part of our “worth”.
I am always intrigued by the meta stories that can be applied to our web contributions, and this is certainly one of them. Buying shares in someone: is that like friending, or more like “stalking”? Selling shares (not done that yet): is that a snub or just a virtual commercial decision?
It also reminds me of the political web-based RPG eRepublik, where people live within a political system of work and taxes, but gather into countries and provinces of interest. Rather like Minecraft, people manufacture virtual goods and produce that are bought and sold on the market. (I had not visited for a while, so my citizen had died, but he is now back in good health.)
All these applications now wire nicely into Facebook and Twitter as the main substrate, yet do not exist in those environments. It is almost as if Twitter and Facebook are social media operating systems.
There is certainly more to be done in this area (that’s what I am working on now in relative secrecy BTW) 🙂

Feeding Edge is 2 Years old

Another significant milestone today. Feeding Edge Ltd is now two years old. It is something I am extremely proud of, and when I reflect back on this year it has been so varied. There have been some challenges, but the worst of those have been resolved. For the most part it has been such an entertaining and stimulating year it’s hard to think of it all packed into 12 months.
feeding edge 2nd birthday
(When I added the second flame from last year’s I used Photoshop CS5 puppet warp on the flames. It’s amazing: it puts a mesh over part of the image and you edit it like you would a 3d mesh.)
A year ago I could not have imagined where I am at today; the diversity of it would not really have fitted into any other company. The ability to go with the flow, trusting in serendipity and gut feeling, has been incredibly useful. If companies let the creativity of their employees flow, rather than focus on control and crackdown, then I am sure we would be generating some fantastic innovations and growth in business. Though, selfishly, if everyone does that then it makes it much harder for me.

So this year I have a few sparklers (though some customer names and projects are not public)

  • Consulted on virtual worlds and games for the government
  • Built a complex system of Second Life and OpenSim interactions with Drupal and a Java model for medical training
  • Toured washing away cave paintings at conferences and gatherings all over the UK and elsewhere, including Finland and Ireland
  • Appeared on shows in Second Life and given many talks too
  • Started to get the ball rolling as Chairman of the BCS Animation and Games SG
  • Been a port of call for references and direction as a virtual world advisor to startups
  • Built a Drupal-based social and political hub as a proof of concept
  • Reviewed games on Game People like Kinectimals and 3D GT5

The ultimate highlights though have to be:

  • Forming the as-yet-in-stealth social games and transmedia company, getting seed funding and filing the patent for the idea.
  • Being given the chance to work on kids TV, inspiring the next generation with future technology on The Cool Stuff Collective.

The games company is a mix of having to architect, design and direct some development, and is very much hands on with the technology. The concept for our first product still amazes me and I am very proud of it. With a bit of luck we will get bigger very soon and we can deliver an even more amazing rendition of the concept, but to my partners in all this I say a huge thank you. I want to write more about what we are doing, but now is not the time or place. I still have a stack of code to write, but my coding partner out there is doing some awesome work making sense of the ideas we come up with for implementation.
The Cool Stuff Collective has been an amazing journey too, from the first conversation about being a technical advisor to being thrown into the studio to present. Now midway through series 2, yesterday I was out with the crew filming at the Pure Tech racing simulators then dashing down to the Intech hands-on science centre. Being able to inspire or interest the next generation of techies, and maybe reach some of their parents, with tech that is already here but seems like science fiction has been an incredible honour.
Look at the list of things we have covered:

3d Printing, Haptics, ARDrone, 3d Scanning, MMO Lego, AR, Kinect, Mind Control, SMART Tables, eReaders, 3d Cameras and Glasses, Unity3d/Evolver games dev, Cloud Computing, Wikipedia, Photoshop, Laser Holographic Projection…
Still to come are solar flares and OpenSim, and for the outside videos we have now done indoor skydiving, indoor snowboarding, racing simulators, science gadgets and a planetarium.
So I have ended up on Wikipedia and have over 20 TV recordings under my belt now. I have a showreel of sorts with its own page here, and my new business cards say, amongst the blurb, TV Presenter. (I think that’s valid now, isn’t it?)
When people ask what it is I do and what Feeding Edge does, I think it does all sum up in “Taking a bite out of technology so you don’t have to”.
That is because my mission is pushing things forward and thinking of the whole: not just design, not just tech, but the social implications of it all, mashed in with the fact that things should entertain and engage us as humans.
So what does next year bring? Well for me, more of the same is the answer.
I am asked how I have time to do all the things I do. The answer is I don’t. Sometimes things have to slide a bit. Whilst many things seem diverse, they are linked. I play games, looking at them for review, to spot trends, to see how things might be used in other gamification contexts, and for enjoyment. Then I write about them, present about them and even build them. It’s all part of the flow. The same goes for the other emerging tech. If you are interested in 3d virtual worlds, then naturally how to create 3d content, how to experience 3d content and how to use 3d environments to reach an audience becomes part of everyday life.
Then there is the social media side of things. I tweet, blog, share photos on Flickr, put game achievements up on Facebook and Raptr. It is both a personal sharing of what’s going on to those who need to know or are curious, and also a social experiment in how it feels to do these things and the impact it has on my life. Having that personal experience lets me share it with others and with companies, and get them to the good part of this communication revolution rather than stagnating.
People I know often say to me they only understand 1/3 of my tweets. That is great, as probably that 1/3 was for their benefit and the other 2/3 for others. Mixing business, social, tech and existence on one channel in 140 characters is still fascinating. It is a microcosm of the whole of what I do with Feeding Edge.
So to all my customers, partners, competitors, friends, mentors and fellow virtual world evangelists I say a huge thank you for all your support.
Right, back to it. Now what was I doing again?

He has done it again! Telepresence this time

This guy Johnny Chung Lee, now a “rapid evaluator” at Google, is amazing and I love how he approaches things. Way back he did the Wiimote hacks that made 3d motion control out of wearing the sensor bar and keeping the Wiimotes steady. As Feeding Edge’s tag line is “Taking a bite out of technology so you don’t have to”, you can see why I have an affinity for his work!
This time he has hacked together a $500 telepresence robot. These things seem to keep popping up, so he is definitely “bang on trend”. They are a weird combination of a physical avatar used to navigate physical space and “be” somewhere. Like all new ideas they may seem daft, and I did have a fleeting “that’s stupid” thought. However, as I have mentioned before, when I see something and think that, I know I have to look further into the idea, especially when serendipity subconsciously shouts out about the subject.
I bumped into this because I was looking at the more commercial telepresence robots from vgocom. This version though uses the iRobot Create (Roomba).

They had featured a telepresence ER/A&E robot on BBC Horizon, and combined with a piece we filmed on Wednesday for The Cool Stuff Collective, where I was trapped in a video box, it all started to link up. Further reinforced by a conversation about driving robots from a virtual world that started the very same afternoon!
This has a nice circular element to it, in that you will notice the great Johnny Chung Lee’s blog featuring this quick build is called Procrastineering, and its tag line, which is something I live by, is “giving in to productive distractions”. It brilliantly sums up the flow of serendipity and the combination of tech and art and ideas, mixed with human conversation, that seems to lead in a positive direction. I know it is not for everyone and it is a seat-of-the-pants existence, but for me it feels right.

Diminished Reality – S2Ep3

On the current Cool Stuff Collective episode running this week I got to show a few things. The main theme of the future tech slot was based on having seen this video about diminished reality (which we did not get to show on the show).

It struck me that the best way to show this in action, and to explain the complexity of it, was to use Photoshop CS5. The full version of Photoshop has an amazing content aware fill. This is used to automatically replace something in a picture by calculating what is likely to be behind that object. There have been ways to do this in Photoshop manually, but it is the simplicity of the process that I found interesting.
I took a photo of Monkey by a patterned wall.
monkey banana
In Photoshop (live, in realtime) I magic-wand selected the bright yellow banana and then did a content aware fill, and this happened.
monkeywithoutbanana
Now this was 3 clicks and almost instant. It is not perfect, there is still a little outline of the banana and the wall is a little wonky, but it is amazing!
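To get a feel for the underlying idea, here is a very naive stand-in for content-aware fill on a tiny greyscale grid: masked pixels are repeatedly replaced with the average of their known neighbours until the hole is filled. Real Photoshop uses far smarter patch-based synthesis; this only shows the “guess from the surroundings” principle.

```python
# Naive hole filling on a 2D grid: propagate known neighbour values
# inward until every masked cell has a guessed value. Illustrative only.

def naive_fill(image, mask):
    """image: 2D list of ints; mask: 2D list of bools (True = remove)."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    unknown = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    while unknown:
        progress = set()
        for y, x in unknown:
            vals = [img[ny][nx]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in unknown]
            if vals:
                img[y][x] = sum(vals) // len(vals)
                progress.add((y, x))
        if not progress:      # nothing known to copy from - give up
            break
        unknown -= progress
    return img

wall = [[50, 50, 50],
        [50, 255, 50],   # the bright banana pixel in the middle
        [50, 50, 50]]
hole = [[False, False, False],
        [False, True, False],
        [False, False, False]]
print(naive_fill(wall, hole)[1][1])   # 50 - filled from the wall around it
```

On a plain patterned wall this crude averaging already looks plausible, which is why the banana disappears so convincingly; it falls apart on complex textures, where the patch-based approach earns its keep.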
For a joke I then highlighted Monkey (I had already selected most of him, but not precisely, as this was a quick take 🙂 )
nomonkey
This again is not perfect, but as a proof of what it does it is ideal. I also like the Predator-like image we generated.
As part of the piece I also got Sy to look at Word Lens on the iPhone. This replaces words that it sees via the camera with other words: translations, reversals, removals etc. It is a mix of augmented and diminished reality.
Yes I did do some other things in the show, yes I did put some sports kit on and play Mario Sports against Monkey, and yes he did beat me at basketball. (My gaming cred is now reset to zero, but I will have my revenge Monkey!)
That will be going into the series 2 showreel I am sure 🙂 Check out showreel one though 🙂

Having a go with Kinect Hacks

For reasons that will become apparent in a few weeks’ time, I needed to see if I could get my Mac to talk to the Kinect using the brilliant open source OpenKinect.org. I don’t do too much in the command shell on my Mac, so the realms of Homebrew and MacPorts mentioned in the instructions, whilst I knew what the point of them was, meant that my machine was in a bit of a state.
I had used something called Fink a while back, but could not remember why, so I tried the Homebrew instructions but failed: too many paths and bits were not very happy to take what is a ready made package. So instead I went down the MacPorts compile-it-yourself path.
http://openkinect.org/wiki/Getting_Started
The glview application then ran nicely and told me I had 0 Kinects attached to my Mac 🙂
A prerequisite for this is to have a Kinect with its own power supply, as opposed to the newer bundled Xboxes and Kinects where the power is built in. I simply took the Mac to the Xbox, unplugged the Kinect’s USB from there and popped it into the Mac, then ran glview again as a test. Bingo!
openkinect
At its very basic mode you can see the colouring for depth being rendered as the predlets move nearer or further from the device.
Next step is to hook into the libraries and make sense of the data 🙂
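A first stab at “making sense of the data” might just bucket raw 11-bit Kinect depth values into near/mid/far bands, much like glview’s colouring does. The thresholds below are illustrative guesses, and while the libfreenect project does ship a Python wrapper (something along the lines of `freenect.sync_get_depth()` to grab a frame), check the OpenKinect wiki for whatever your build actually exposes:

```python
# Bucket raw Kinect depth readings into coarse distance bands.
# RAW_DEPTH_MAX and the band thresholds are assumptions for illustration,
# not values from the libfreenect documentation.

RAW_DEPTH_MAX = 2047  # 11-bit raw depth from the sensor

def depth_band(raw):
    """Classify one raw depth reading into a coarse distance band."""
    if raw >= RAW_DEPTH_MAX:
        return "no reading"   # the sensor saturates when it sees nothing
    if raw < 700:
        return "near"
    if raw < 1100:
        return "mid"
    return "far"

frame = [600, 900, 1500, 2047]   # a pretend row of raw depth samples
print([depth_band(r) for r in frame])  # ['near', 'mid', 'far', 'no reading']
```

Swap the print for a colour lookup per band and you have the essence of the glview display: the predlets walking towards the sensor move from "far" through "mid" to "near".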

It is moments away – AR extreme

I had to repost this video as I think it is brilliant in its extreme view of Augmented Reality. The extremes of a use of tech, and how it impacts life, are where we start to find the new ideas. The diving into the social web mid tea-making, and then the final part on user preference, is really great.
Of course this is lots of virtual information placed onto a single physical world, but AR is also virtual environment augmentation. Where do we draw the line though?

Augmented (hyper)Reality: Domestic Robocop from Keiichi Matsuda on Vimeo.

It was posted on IT World and came from a tweet by @mmpow.