

Why talk to an industry analyst?

As an industry analyst at a well known company (i.e. this is not about Feeding Edge, which is now just a place to blog and to hold the rights for my novels), covering all things Metaverse across industrial, enterprise and consumer, I know there are newer companies in the space that may not know what being an analyst means for them. So here is an updated and repurposed blog post I wrote a few years ago elsewhere about what an industry analyst does and why you might want to talk to us (about any of our coverage areas, not just metaverse). Here is the who, what, where and when raison d’être for industry analysts as I see it.

 

Midjourney Cyberpunk Me

Who?

I have been an industry analyst for nearly 8 years, but I have been in the emerging technology industry for well over 30 years as a software engineer and architect and as a writer and presenter. It is not uncommon for an industry analyst to have a lot of field experience; I used to brief analysts during my corporate life at IBM too. Equally, many analysts come from a journalistic or statistics background, trained in finding and sharing facts and figures. There are many personality types within the profession, though in general it is a people-based one, developing contacts and relationships across industry areas. Analysts have to take on board a lot of information but cut through the marketing hype to find patterns and facts based on their experience. Not all analysts are going to be long in the tooth like me, but it often helps. In the case of this metaverse wave, I was what might now be called an influencer of the 2006 enterprise virtual worlds wave as a metaverse evangelist, putting Wimbledon into Second Life and developing connected solutions in the pre-IoT days with the then new MQTT, not to mention being around for the early days of many current technology trends: the web, e-commerce, social media, blogging and, personally, gaming. My research agenda primarily covers emerging technology, where everything old is new again. An important thing to add, and certainly one that is true of my group, is that analysts maintain integrity and impartiality by being separated from the commercial side of the company. Though we do commercial work, we do not do pay-for-play where I work. The content of what we cover and say is not based on how much clients pay, nor on how much an analyst relations department spends on a dinner. Everyone has their own motivation and connection to a subject, but impartiality, trust in treating discussions off the record, and building a reputation are core for our analysts.

What?

Large enterprises and the smallest of start-ups, and everyone in between, share a need to understand the market they are currently in, or are looking to move into. What is the competition up to? What is resonating in a market? What is going to totally disrupt the current plan? Start-ups need attention and connections to raise funding; enterprises need ongoing growth for shareholders and investors. Analyst companies are there to help provide answers and perspective on these sorts of questions. A large corporate entity may well have a competitive analysis department, but it will be focussed on the other big companies, less so the quirky start-up set to make a dent in the industry. Smaller companies are busy just trying to do what they do, and may not have the time to look up and see what is going on around them. Analysts are always across a spectrum of companies, sizes and industry types. Whether it is writing regular short-form reports on companies and their latest services, longer-form reports across an entire industry sector, running ongoing surveys across thousands of respondents in all industry types for directional data, custom consulting work, webinars, presentations, offering Merger and Acquisition (M&A) due diligence or just a quick call to answer a client question, analysts offer their considered opinions, backed by experience and data.

One of the other things that is important to consider, especially as a start-up, is to be in the minds of relevant analysts. We talk to people all the time, and a subject might suddenly come up, perhaps on a complete tangent, which is where we pattern match and say “I saw this really interesting approach from…”. Those conversations might be with VCs, with companies looking for partners or with potential customers of the company.

Where?

The actual answer to this is anywhere. We take briefings from companies, typically around 30 minutes to an hour, on the phone, over web video conference or in person at trade shows. We try to ensure we do talk to companies, not just look at the web or a press release; as I mentioned above, this is a people business. Trade shows such as the huge MWC are bread and butter to analysts, and for me Hannover Messe and Augmented World Expo are also important. For example, my experience of MWC in 2019 was 30-minute meetings scheduled with 30 minutes of walking time between them for 3 days; I logged 10 miles a day walking just in the conference centre. That is a lot of conversations and presentations from a varied set of companies.

Social media has always been a useful place for me personally and professionally since the early days, and analysts are often to be found there now. I am always willing to hear about interesting things on twitter/X as @epredator, on LinkedIn at https://www.linkedin.com/in/epredator/ (and now Mastodon and many other places).

When?

Anytime you need to know something across your industry, or you need the industry to know about something you are doing or about to do, that is when you can benefit from an analyst company's, or multiple analysts', perspective. A start-up may be in stealth, not ready to announce yet, and that is where an analyst's integrity is key: tell us what you are doing off the record and that may well lead to some suggestions that help, or to a better description from us when you come out of stealth and we share what you are doing.

Whilst we are separated from the commercial side of things, we are aware that what we produce is the core product for the company. Companies pay to access the basic reports; long-form reports and detailed survey data are charged for at a different rate, as is our time on the phone answering questions or doing custom consulting. However, nobody pays to tell us things, and in those conversations we do often share our thoughts, so you can get something back from a briefing rather than just broadcasting what you do. A final tip: if you are publicly known, have as much information as possible easily accessible, in a slide deck or on a website. We all take notes during conversations, but being able to look things up after the event is important. Who the company is and where it was founded matter as much as how cool the product is; this is a people business.

Metaverse and GenAI webinar for BCS

This month was the AGM for the BCS Animation and Games specialist group that I have been chairing for a very long while now. I gave a presentation from a personal viewpoint (this is not a work presentation, and I make that clear in the disclaimers, though it is what I work in too of course) on the advances in Metaverse and GenAI content creation. The full YouTube version is below, but the link to the blurb and bio (and the video) at the BCS is here.

We are always looking for presenters to come and share some ideas with our specialist group around all things games, animation, metaverse, esports etc, so if you are interested, ping me; there is a slot waiting for you. We sometimes get a big crowd, other times smaller ones, but with the videos published like this it can be a useful thing to do and share.

For those of you who don’t know, BCS, The Chartered Institute for IT (formerly the British Computer Society) is a UK-based professional body, with worldwide membership, for anyone in the tech industry. It exists at all levels, from those just getting going in the business to Fellows with vast amounts of experience and a willingness to help. It was part of my professional certification whilst at IBM, and I then also became a certifier whilst there too. Volunteering and sharing ideas, such as this presentation, is one of the many ways to get involved (you don’t have to do this). It benefits you as an individual but also elevates tech roles within the enterprises and organizations you work in.

You can find more at BCS, The Chartered Institute for IT (bcs.org)

Meta Quest 3 – seems very good

Firstly, yes, it looks like it's been a while since I posted on here, but that’s the challenge when work life and social/hobby life are one and the same thing: I have to save all the really good stuff for my professional output. However, I bought myself what is my 16th VR or AR headset, the Meta Quest 3, and as a long-time user of such kit I wanted to share my personal (and only personal) thoughts about this bit of kit.

Quest 3 box
Headset and controllers
Looking like the Aliens from Toy Story 3, with 3 eyes, wearing a Quest 3

Our Quest 2 has been kept in our kitchen, ready for anyone in the family to use, since it arrived. The kitchen is the only place with just about enough free space to wave arms and make the odd step safely in VR. Though, as I may have explained before, there is a dent in the fridge from a side fist I threw playing Superhot VR when I was training for my 2nd Degree Black Belt in Choi Kwang Do. The Quest 2 went straight to VR, obscuring the world (that was its job after all), but the guardian that we manually drew around the kitchen (often repeating that process) tended to keep us in bounds. Picking up the controllers and getting them on the right hands the right way around was always a bit fiddly, like plugging in a USB-A and always getting it wrong the first two times despite there only being two ways to do it.

The Quest 3, on the other hand, starts with colour pass-through. You put it on (over glasses too, as they fit in the headset) and just see a slightly blurry version of where you actually are; it's much quicker to get in and be stood in the right place. (Quest 2 has pass-through, but it’s very disorientating and black and white.) The new hand controllers are much easier to see and to pick up the right way, as they don’t have the massive halo loops of plastic around them for sensors. The headset also does the guardian and scanning around for things on its own; you don’t have to (though you can) draw a line around yourself, which on the Quest 2 seemed like some sort of weird religious or mystical incantation involving a border of salt.

The Quest 3 is also lighter and smaller than the 2, or at least feels like it, I haven’t weighed them. This makes the whole experience of just getting going much less faff and bother. It is a similar leap to the one from when we had to place external sensors and try to line everything up before playing anything.

Going into a VR game or application is pretty much the same across headsets, though it is noticeably crisper and faster/flashier, as all new kit tends to be. Though now games in particular are being upgraded to have richer visuals, which in turn will start slowing it all down again. I plumped for the lower spec memory of 128GB as mostly the family plays Beat Saber and Pistol Whip, but I now have some games that seem to be 13GB, so it will get eaten up more quickly now.

The Mixed Reality (MR) element of the Quest 3, blending the pass-through view of the world with digital elements anchored in that view, is the clever bit. This is augmented reality, but calling it mixed reality is as accurate. The view you see is virtual: it is a camera showing you the physical world, and the headset is able to scan the environment and build a digital mesh of it, knowing what the walls, surfaces and objects in the view are. It does this in the example first steps app, asking you to look around, and you see it overlay what it considers the mesh of the world. It’s very Matrix, and deliberately visual; there is not really any need to show the user other than to intrigue them. The demo app then causes a break in the ceiling, with bits falling on the floor, giving a view of a space environment and another planet in the distance. Through this hole drops a spaceship that lands (in my case on the kitchen floor). Being in pass-through makes it easy to walk around without bumping into physical things, as you can see them, so you can get a better look up into the hole in the ceiling. Over the course of the demo, tribble-like fluffy things break through the walls and run around your space; they drop behind things like the breakfast bar in our kitchen and you can’t see them unless you go and look over. The creatures also run along the walls and other surfaces. It really is quite effective as a cartoon comedy demo of full AR. As this is not holding up a tablet or phone, the field of view and the turning of your own head, and hence your view, is 100% natural.

The wonderful old painting application Vermillion had some upgrades, so now in full MR the easel and paintbrushes can be in your own room and, even better, you can now hang your paintings on your physical wall and they will stay there if you want them to. You can see family walking into the room and talk to them, though it’s a little odd for them not knowing whether you can see them or not, having been used to giving the VR headset a wide berth (as the fridge didn’t, above) 🙂

This Quest 3 for the kitchen liberates my Quest 2 for my gaming PC, which in turn retires the tethered Rift S. It also means we can play multiplayer games with two of us in the house. The main account can share their apps (though only on one headset). I shared the apps to the Quest 3 and unshared them on the Quest 2, but we were able (with me logged into the Quest 2 and Predlet 2.0 using his account on the Quest 3) to play a good few rounds of the VR game Walkabout Mini Golf and some multiplayer Beat Saber. I say a good few rounds, but the battery life is pretty short on both headsets. This can be sorted by attaching a very long USB cable to the power outlet, but that sort of removes the untethered nature of the headset.

This Quest 3 and its pass-through is obviously a much cheaper version of what the Apple Vision Pro is aiming to do, but it’s out first, it works and it has a library of interesting things to engage with. Though the really new stuff like Assassin’s Creed and Asgard's Wrath II (Remake) are not out at release, so it’s more to play VR things and the odd MR update at the moment. I say out first, but pass-through has been the mission of the commercially available Varjo headsets for a while.

One other thing pass-through allows for is being able to see your phone (sort of, as it's a bit warped). This is very useful, as trying to buy things in the store in VR doesn’t work since it needs the phone app to do all the verification. This used to mean taking the headset off; now it means double tapping the headset for pass-through and finding your phone. That’s a silly use case, as it should just work in the environment, or represent the phone and its OS in the environment, but that’s payment rails for you.

In short, it’s good, worth it and I like it. I am looking forward to more proper AR/MR apps and experiences, whilst enjoying the brand new ones like Samba de Amigo that just launched (again not new, as we played that a lot on the Sega Dreamcast with a set of maraca controllers way back)… and pose….. Happy memories reborn 🙂

Reconfigure – The Movie, nearly

Having written my books Reconfigure and Cont3xt, I am always using them and the story I created as a way to explore other mediums. I started drawing a few years ago to see if I could maybe go manga with it; that included buying some action figure mini drawing mannequins that have some tech and weapon props. I created some MetaHumans and have a Unity-based mocap suit from a Kickstarter to see if I could produce anime. Audio voice generation has been another thing I tried. Each of these projects would be a significant amount of time, effort and probably cost too, but delving in as a mini proof of concept gives me an appreciation of what would be needed.

Midjourney and the AI image generators have offered another option to explore, and very quickly too. One of the challenges is to get the same character to appear in each generated image (i.e. a storyboard). I decided to sort that out later in this POC, though I did suggest that the actress to play Roisin should be Ruby Rose, to get a potentially consistent likeness.

Midjourney experiments
Roisin at the start of Reconfigure in a bit of a last minute situation getting off the grid

Having got this image, which was pretty good for a first attempt with very little text in the prompt, I popped over to runway.ml and tried its new Gen-2 image-to-video generation, this time with NO PROMPT at all! It decided she should look up and stare into the distance, which in the book she is doing as she is being chased and very aware of her surroundings while she tries to outwit those after the amazing tech she has discovered. They only generate short clips on the free accounts, but I found those few seconds quite astounding and bewitching.

GenAI is obviously on a major drive into all sorts of areas, but the ability to potentially use tools like this to create a film version of my books on my own time and budget is emerging as a strong contender, unless Netflix or Prime decide they would like to do it instead 🙂

EyeBlend better than an Apple Vision Pro – still

In 2016 I wrote Cont3xt, the follow-up to Reconfigure, and part of that was an upgrade our lead character gets. She has access to an API that can manipulate the physical world like the virtual; in the first book she builds that up into a Unity application for her smartphone. In the second she gets an EyeBlend headset, state of the art (and still just a little ahead of Apple and co’s attempts). So she ports the Unity app and upgrades the messaging system for this high-bandwidth device and to enhance the experience, rather like all the developers are doing right at the moment. Life imitates art 🙂 Read Reconfigure first, but this might be a topical time to check out Cont3xt whilst you wait for the Apple demo in a local store next year. (Just £0.99 each for these adventures 🙂 )

“An EyeBlend? They are not on general release yet? How did you get that?” Her eyes lit up wildly as Roisin took in the news. 

“Oh you know, I have a few friends who hack for a living.” Alex replied with an air of comedy superiority. 

“You sure do! The EyeBlend is an awesome piece of kit. It does full blended reality applications. It uses field effect projection with live object tracking!” She said enthusiastically. “They call it a hologram, though it really isn’t as such. You can forgive that though as the effect is the same, apparently.” Roisin was on her tech turf again. 

What does it take to make the long running Games At Work dot Biz podcast or any other for that matter?

I had the honour of hosting my good friends from Gamesatwork.biz for a BCS Animation and Games event to hear about how they have managed to create compelling content in a weekly podcast for what is now approaching 400 episodes! The official BCS event link page, with links to everything else about the BCS, is here, but I have added the YouTube video directly. It's very free-form, not scripted, and it was entertaining to be the interviewer, especially as I know Michael, Michael and Andy just know what to say and when to say it.

BCS Animation and Games hosts Games at work dot biz team for a free form chat about the show

As the guys explain, a lot of the content is based on us all sending them things we are spotting, or are busy doing and enjoying. All the podcast episodes have their own show notes and links, so after watching this go check those out and see the sort of thing they do, or vice versa, and then start sharing stuff with them; the more the merrier. Also, if you are in tech, check out the BCS as a professional org.

Communicating through avatar size

It always amazes me how there is always something to learn or be surprised by in tech: you think you have a handle on it and then you see or somehow sense something else. I recently got hit with a thought about the relative size of an avatar in a shared virtual space/metaverse application and what that can be used for. I have often talked about the scale of virtual worlds being a mutable thing; we can be sub-atomic or at the scale of a universe in a virtual environment. I have also talked about being the room or place, i.e. a sort of dungeon master approach, controlling the room (as a version of an avatar) whilst others experience something about that place.

Avatars of all sizes

The first experience that got me thinking about relative avatar size was getting to play Beat Saber as a 3-player VR game. I had done 2-player, where you see your fellow player off to the side facing you but experiencing the same song and blocks. You are so busy playing and concentrating, but it is a good experience to share. However, once over 2 people, at 3 in this case, you are all on your own track but facing a common central point, like being on the points of a star. The player that is doing best has their avatar zoomed and projected up towards that central point as a reminder that you need to work harder. It's a great mechanic and is using avatar size as a communication mechanism, i.e. they are better at Beat Saber than you.

The next experience was to properly play the VR dungeon-crawling, turn-based dice and card game Demeo as a multiplayer game. This game has you and your friends gathered around a board decked with figurines. You are represented by your hands, to pick up the pieces and move them or to roll the dice, and by a set of glasses or a small mask. The avatar is not a whole thing, no need for legs etc. The board is the thing you all want to see. Each player can change not only the direction of the board just for them, by dragging it around, but also zoom in and out to see the lay of the land or get right in and look at how the characters have been digitally painted. The game is collaborative and turn based, and you get to see the other players' hand and mask avatars. Here though is the twist: you also get a sense of whether the other person is stood looking around the table top or if they have zoomed close, because the avatar you see of them scales up and down in size according to their view of the world. Not only can you see the direction they are looking but how detailed their view might be. If you are zoomed in and you look up, you see the giant avatar of your fellow player looking at you. This is all very fluid, gameplay is not messed up by it, and it shows a level of body language that only really exists in metaverse style applications. The VR element makes it feel even better, but the effect works on 2D screens too.
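To make that mechanic concrete, here is a rough back-of-the-envelope sketch in Python (my own simplification for illustration, not Demeo's actual code): if the board is the shared frame of reference, a player who has zoomed out is effectively a giant relative to it, and what you see of them is also scaled by your own zoom.

```python
# Conceptual sketch only (not Demeo's implementation): how a remote player's
# apparent avatar scale could fall out of each player's zoom into a shared board.

def apparent_avatar_scale(viewer_zoom: float, other_zoom: float,
                          base_scale: float = 1.0) -> float:
    """Scale at which the other player's avatar is drawn for the viewer.

    zoom > 1.0 means that player has zoomed in on the board (the board looks
    bigger to them, so they are effectively smaller relative to it).
    """
    return base_scale * viewer_zoom / other_zoom

# You are zoomed right in (4x) and your friend is stood back at table scale (1x):
print(apparent_avatar_scale(viewer_zoom=4.0, other_zoom=1.0))  # 4.0 -> a giant looms over you
# Both zoomed in by the same amount: you appear normal size to each other.
print(apparent_avatar_scale(viewer_zoom=2.0, other_zoom=2.0))  # 1.0
```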

We often talk about a sense of presence, remembering you were next to someone in a virtual place, but this is another dynamic that is obvious, though only obvious now, having experienced it. Anyway, that’s the thing I learned earlier this week. Also, in order to find a picture to express this, the image above started as one DALL-E sentence for the AI generation, but I also tinkered around with the new web editing that lets you add new frames of generation to existing images, creating AI-generated composites! The stately home was an accident, but somehow looks very much like my old 2006 metaverse evangelist physical and virtual home of IBM Hursley. The ability to create an image and a feeling for this post, rather than just grab a screenshot from the games (which is always tricky to get what you want anyway when it involves other players), is also rather cool, and it is the first time I have created an image for a subject, rather than the subject being image generation. The future is arriving fast!

Anyway, avatars: not always what you think they are, are they? 🙂

Digital humans evolve

Way back in the dim and distant era of 2009 I was exploring a lot of tools to help me build virtual environments with avatars and characters that could be animated, typically in Unity. 3D modelling is an art in its own right, and then rigging those models for animation and applying animations to them is another set of skills too. At the time I was exploring Evolver, which in the end was bought by Autodesk in 2011. A decade on and there is a new kid on the block from Epic/Unreal called MetaHuman. This cloud-based application (where the user interface is also cloud streamed) runs in a browser and produces incredibly detailed digital humans. The animation rigging of the body is joined by very detailed facial feature rigging, allowing these to be controlled with full motion capture live in the Unreal development environment. Having mainly used Unity, I found a lot of similarity in the high-level workflow experience of the two, as they are the leading approaches to assembling all sorts of game, film and XR content. However, there was a bit of a learning curve.

I decided to generate a couple of characters and ended up making what to me feel like Roisin and the Commander from my Reconfigure novels. Creating them in the tool plays instant animations and expressions on their faces and is really quite a surreal experience. I installed Unreal on my gaming rig, with its RTX graphics card and all the trimmings, and set about seeing what I needed to do to get my characters working.

First there is an essential application called Quixel Bridge, which would have been really handy a decade ago, as it brokers the transfer of file formats between systems; despite standards being in place, there are some quirks when you move complex 3D rigged assets around. Quixel can log directly into the MetaHuman repository, and there is a specific plugin for the editor to import the assets into Unreal. Things in Unreal have a data and control structure called a Blueprint, a kind of configuration and flow model that can be used in a no-code (but still complex) way to get things working. You can still write C++ if needed, of course.
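As a flavour of what it looks like when you drop below the Blueprint level, here is a minimal sketch using Unreal's Python editor scripting to load an imported MetaHuman Blueprint and drop it into the open level. This is my illustration rather than the exact steps I took; it assumes the Python Editor Script and Editor Scripting Utilities plugins are enabled, and the asset path is a hypothetical placeholder for wherever the Quixel Bridge import landed.

```python
# Minimal sketch, run from Unreal's Python console. The asset path below is a
# hypothetical placeholder; substitute the path of your own imported MetaHuman.
import unreal

ASSET_PATH = "/Game/MetaHumans/Roisin/BP_Roisin"  # hypothetical import location

# Load the imported MetaHuman Blueprint from the content browser.
metahuman_bp = unreal.EditorAssetLibrary.load_asset(ASSET_PATH)

# Spawn an instance of it in the currently open level, 2m in front of the origin.
location = unreal.Vector(200.0, 0.0, 0.0)
actor = unreal.EditorLevelLibrary.spawn_actor_from_object(metahuman_bp, location)

print(f"Spawned {actor.get_name()} at {location}")
```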

My first few attempts to get Roisin to download failed, as the beta was clearly very popular. I only took a photo of the screen, not a screen capture, so it is a bit low quality, but there is more to come.

Metahumans

However, eventually I got a download and the asset was there and ready to go. Unreal has a demo application with two MetaHumans in it, showing animation, lip syncing and some nice camera work. Running this on my machine was a sudden rush to the future from my previous efforts with decade-old tech, for things such as my virtual hospitals and the stuff on the TV slot I had back then.

Roisin ?
Roisin from Reconfigure and Cont3xt in Unreal engine

The original demo video went like this

Dropping into the editor, and after a lot of shader compilation, I swapped Roisin in for the first MetaHuman by matching the location coordinates and moving the original away. Then in the Sequencer, the film events list, I swapped the target actor from the original to mine, and away we go.

This was recorded without the sound, as I was just getting to grips with how to render a movie rather than play or compile an application and then screen capture it instead. Short and sweet, but it proves it works. A very cool bit of tech.

I also ran the still image through the amusing AI face animation app Wombo.AI; this animates stills, rather than animating the actual 3D model as above. I am not posting that as the results are short audio clips of songs and the old DMCA takedown bots tend to get annoyed at such things.

Now I have a plan/pipe dream to see if I can animate some scenes from the books, if not make the whole damn movie/TV series 🙂 There are lots of assets to try and combine in this generation of power tooling. I also had a go at this tutorial, one of many that show the use of live facial animation capture via an iPhone streamed to the MetaHuman model. I will spare Roisin the public humiliation of sounding like me and instead leave it to the tutorial video for anyone wishing to try such a thing.

Let's see where this takes us all 🙂

Washington 3DTLC, twitter, education and progress

I just got back from my trip to Washington for the very excellent 3DTLC conference. This was certainly one of the most productive, interesting and stimulating conferences we have had over the last 3 years, primarily because it was not a conference to convince people of the value of virtual worlds, but to look at ways to move everything forward from that point. The focus on Training, Learning and Education took some of the more traditional focus on “hey, how do we turn a fast buck from this” (a.k.a. ROI) out of the discussions and hit us all with a mix of case studies of things done in both enterprise and academia, along with some future thinking and discussion.

The entire conference was also heavily powered by the Twitter back channel; #3dtlc was getting a lot of traffic. This should be no surprise from virtual world experts used to attending virtual events and maintaining an engaged buzz through ongoing discussion. (Apologies to all those not embroiled in this world who happened to experience it all via flooded Facebook feeds.) So whilst we had no avatars, textured or untextured (as above), we certainly all had a virtual presence. The conference was not being streamed to the web as audio or video, but as a considerable set of 140-character conversations, crowdsourced and minuting the proceedings.

The location was Gallaudet University, a 99-acre site about 2 miles from Capitol Hill. A very grand campus, though contrasted outside by a slightly different area that many taxis seemed unwilling to come to.
Gallaudet University
Outskirts
However that did not detract from the content.

Tony O’Driscoll had done a great job of getting the agenda together and inviting people. It was Tony’s request that had persuaded me to spend some of Feeding Edge’s travel budget to get over to Washington.

Analyze, Leverage, Teach, Learn, Design, Explore, Improve, Connect, Extend and Convince were the topics for each panel or interview.

The first panel, Analyze, had a set of positioning statements about where we are as an industry, kicked off by Erica Driver from ThinkBalm. The general consensus was that we were still at the early stages. Steve Prentice of Gartner reiterated the important fact that this is all about people, unlike many other technology adoption curves, and that there was more than one adoption curve interacting here. Several other themes emerged that may or may not be considered a distraction.

1. What do we call all this? Everyone has a name for it: metaverse, immersive, 3.0 etc. Though, as Erica pointed out, we started with e-commerce, then moved to e-business, and now no one really talks about it as a thing anymore; it is just part of general business. The same will happen to whatever we call this immersive revolution.

2. The G word. There was a polarization over whether or not anything or anyone should ever use the word GAME. This again highlighted the point above on clarifying what “this” is. In some cases using games and games technology makes sense; in others it is not going to get anywhere in the adoption curve for some of the purse string holders. There is no one answer. Though I must say that when I started in the IT industry 20 years ago it was frowned upon, even as a software engineer, to have the word Game on your CV. Now it is very prominent on mine and part and parcel of any discussions and work I do. That does not mean every solution to every problem has to be a game-based one though.

The Leverage section was an interview with Joe Little from BP. Joe has managed to carve out a good official and respected position with BP, providing a whole set of examples of how they have used both virtual world and visualization technology in a host of applications. An interesting one was around building a group of students up as a team before going on a physically demanding expedition to the Antarctic. The virtual world elements let people get to know one another first, something a few of us have done, so we know it's not an isolated case. Joe’s examples were not all one platform or one example but a spread, and for any doubters in the audience (though they were an experienced bunch) it provided a safe grounding.

Teach saw a fantastic set of panelists who had between them enormous experience as educators and virtual world experts in the field, doing, trying, observing, adjusting and reporting what they do. Pedagogy started to be mentioned a lot, challenging the fact that education has up to now been restrictive in its approach, and that simply replicating boring broadcast lectures in a virtual world will not work any better. Students asleep at a real desk will still be asleep at a virtual desk. The famous Sarah Robbins, a.k.a. Intellagirl, told a fantastic story around letting students safely come to understand social exclusion and discrimination through Second Life. In her example she explained how she set her students a task to go to a Second Life night club, but they all had to go as outsized round spherical avatars (Kool-Aid men). Their individual experiences did not really include much societal diversity. They re-appeared from the club 5 minutes after going. Intellagirl was not sure why they were so quick, but it turned out that they all turned up and were very quickly, politely ejected for not fitting the social norm of the venue. Each of the students was able to explain their feelings of exclusion and their appreciation of the potential problems that attitudes can have on people. As she pointed out, it would be very hard to compare that educational experience with any other way of doing it quite so safely. Hence the ongoing conversational theme that “justify why a virtual world” is often used as a weapon to stop change, usually by trying to make a comparison to things we can already do; this being an example of something we can't usually do, but which had clear benefit in discovering this way of working.

Design as a panel showed some interesting ways of getting things done in virtual worlds. I missed some of the panel but did see Boris Kizelshteyn of Popcha! get an admiring round of applause for diving into a live Second Life demo of their presentation toolset, and also bootlegging the conference by streaming video from his webcam back into Second Life. That rounded the day off nicely from the main conference.

The evening for me was mostly spent at a ThinkBalm event with lots of the drinks bought by Erica and Sam Driver. The conversation was varied and very interesting. We talked lots about Augmenting Reality in various forms.
Roadmap
Day 2 started with the panel I was speaking on, Explore. With America clearly in a mood for change and for the future, the main focus of the session was explaining the Virtual Worlds Roadmap (an excuse to use a photo of the Capitol building). Sibley had managed to get to the venue after some major knee surgery and was sporting not only his trademark hat, but a very impressive leg brace. Koreen was our MC, and set us off on our way.

Sibley explained the premise of the roadmap, that many of the things we are doing today with virtual worlds, we could have done a long time ago. (For me this resonates with the comments from the first session where there are multiple adoption curves and people are a major factor in all this). The roadmap, as Sibley pointed out, is there to break things into industry verticals, to evaluate what we can and can’t do to meet those needs and then work out how to get the gaps plugged. 

Next up on the intros was me. I first of all made it clear that I was still epredator, but that I was no longer with IBM, just in case anyone had not heard the news. I made three points about the future.

1. I explained that we were washing away cave paintings: up until now everything we have done has been broadcast media, and telling people where the hunting grounds were, where the dangers were etc. is replicated in how we do books, TV, film etc. It's also how we do education in a classroom (linking back to the Teach panel). It was a 30-second version of the keynote from ACE in Ireland.

2. There is no one platform to solve all the problems, nor is there likely to be. It is not a case of sticking to or picking one deployment platform and then standing back. I mentioned the difference in need in something like The Coaches Centre: classroom education on some standard texts requires very different technology to the fluid dynamics needed to coach swimming, or the physics in tennis, or the physiological models to teach doctors. It's not one simulation, not one virtual world, not one platform, just as it's not one industry vertical.

3. Assuming the above is the case, things will need to interoperate in ways that suit us as people. We need ways to describe and combine education and training plans that span all types of media and place. This is not simply a case of whether x can import a model from y. It is more about the meta description of how we combine the appropriate levels of simulation, interaction and content from the right places. I added that we need to think about how we not only augment the physical world with the virtual, but also how we augment virtual worlds with other elements of the virtual.

John then talked about his vision of high-end simulations, where and why Intel is interested, and also, as a co-founder with Sibley, reiterated the depth of the work in the roadmap and that it is for everyone to join in and sort out.

Eiliff then also introduced the deeper elements needed for people to be able to learn by exploration, once again reiterating that the elements of learning don't just come from replicating the classroom and that we can take this further than ever now.

The rest of the panel time was spent in various discussions that built on and around these points. I put it to the panel and audience that I think the platform and tech discussions will get blown away when people from the games industry (with their budgets and huge infrastructures) figure out that the benefits in business bring a whole new dimension and scale to this. Sibley pointed out they probably would not, as they were comfortable where they were. I agreed, but suggested that a drop in gaming revenue or some industry pressures might make them look elsewhere, or that the proliferation of easy-to-get middleware and game creation tools such as Unity3D might make lots of game-related startups adjust their focus.

At some point I found myself singing the praises of IBM’s Sametime 3D, and in particular how, in adopting this way of working, just putting it into the normal workflow of everyday life makes the challenges and barriers melt away. A full-on product development cycle merging enterprise instant messaging, so that anyone can, as needed, spark up an OpenSim room to say “let me show you”, action some things and then close it and move on, needs someone like IBM to deliver it.

Next up was Improve, a panel led by Erica Driver. Once again this was real projects getting explained: from J&J recruiting, to SAP’s approach to total integration with physical assets (Shaspa got a good few mentions here), to Robin Williams from Sun explaining the multitude of good work in both SL and Wonderland (which we are all praying does not get killed by the Oracle monolith), and finally ending with Kevyn Renner from Chevron, who won the award for the sparkiest, most dynamic delivery of the whole show. Chevron has been using all sorts of 3D for years, but is increasingly using collaborative immersive 3D for real-world training and safety issues. Twitter was particularly alight with surprise as he got off stage and wandered around with a glass in his hand to demonstrate a “thing” whilst having a pop at the Australians (being from New Zealand). It was a great pitch and the panel also dealt with lots of questions. Robin was particularly on form, with very similar responses to the typical questions that saw me nodding profusely.

Connect was the next panel up. Sam Driver was leading; first up was Robert Bloomfield of Metanomics fame. He explained much of how he fell into being the chat show star in Second Life, how the show grew, and how now, with Treet.tv, it is growing further and being injected into more environments in more ways. I like this point a lot, as it fitted with my panel’s conversations around not just having one environment.

Next up was Joanne Martin, President of the IBM Academy of Technology. Joanne explained the AoT and the fact that she is in one of the few elected positions in the company. The Academy is IBM’s technically focused leading thinkers. The case study was one where Joanne admitted that when she started she knew nothing about virtual worlds (I must have missed one in my conversions 🙂 ), but that when the great and good from IBM had their massive yearly conference cancelled almost at the last minute due to money concerns, they had to come up with something quick. The AoT meeting was the first foray with the original behind-the-firewall solution with Linden Lab and Second Life. Joanne pointed out they used lots of other online connections, webcasts, wikis etc., but that the virtual world had the massive benefit of serendipitous conversations and mingling with fellow attendees. This whole thing is the case study written up by IBM and LL recently. I had always known the AoT would be key in adoption.

The recipe

1. Take a bunch of talented, but sometimes stuck-in-their-ways, old hands with a hint of the mad inventor in them.

2. Take away a long-cherished physical gathering due to budget.

3. Put some new ways of working in front of them all. 

4. If it works completely, the key influencers will say “why did we not do this before?”

5. If it works a little bit, the key influencers will say, why could we not do this better?

6. If it breaks, they will say “heck, we are IBM, surely we can make this work; get out of the way, let me try.”

End result: adoption, products, industry.

Of course, for them even to get to that point, I know full well it needed demand generation and evangelizing in the lead-up, otherwise things like a behind-the-firewall virtual world would not even have got a look in. To think some people told me there was no business value in virtual worlds, and now the President of the AoT was on stage saying precisely the opposite.

Claire from Cisco told a similar tale, with slightly different technologies used in places (they do own WebEx). It was again interesting that my opposite (though slightly higher) number Christian Renaud had shepherded Cisco to a similar point.

Jaque divided the audience with a delivery about a volunteer virtual police and security force, initially in Activeworlds but spanning into other places too. The questions raised, of who watches the watchmen, and whether it is OK to need a police force or if we are all self-governing, certainly caused a ripple on Twitter. All the things Jaque described I think I have experienced, and in some cases had to do: talk down a potential griefer etc. Yet at the same time, a self-appointed set of police seemed odd. A subject that will certainly keep on getting attention.

The penultimate panel was Extend. Tony was the moderator. There was quite a mix on the panel, from the extremely touching and emotive work of the US Holocaust Museum, to outreach to industry analysts on products from Sun, to KPMG’s recruiting fairs, initially in Second Life and more lately in Unisfair, and finally Tim from Pearson talking about his customer work with ECS as a partner for The Coaches Centre.

The Coaches Centre is something that I am very close to, especially with my background in sports and virtual worlds with Wimbledon, so it was interesting hearing a partner/provider talk about it, when ideally it should have been TCC themselves. The basic aim of TCC is to improve the quality and experience of sports coaches at all levels using new and immersive technology. Currently the pilot, with Pearson/ECS as content providers, is running in Canada, but it is just the first step in something much grander that I can see will blossom from this, and, as I mentioned in my panel, it can extend across many platforms and could aggregate and augment all sorts of elements of social and simulation technology.

The final panel was Convince: a body of fellow metaverse evangelists sharing their insights into how to get buy-in where needed. Karen Keeter from IBM spoke, and as a fellow colleague it was very familiar 🙂 Karen also showed Sametime 3D. John from Intel, Emily from Cisco and Brian from Etape also spoke.

John Hengeveld was also on my panel. His main tip was that each project has an impact, so make sure you capture it (positive or negative).

Emily was indicating her sales force just would not go for anything that was not bulletproof and obvious to use.

Brian rounded up the discussion by saying how he came at this from customer need, not out of a passion for the ideas. This was a great matter-of-fact pitch: they needed something, they tried virtual worlds, and wow, they got so much more than they bargained for.

The conference then wrapped up, with a big round of applause for the organization and content. 

A few of us twitterers then retired to an Irish pub for Guinness and chat, but that's another post and this was way too long.

It really is that simple, virtual world whiteboards and diagrams

As a first “bite on technology so you don't have to” I thought this subject would be a good place to start.
A common theme I often come across is the need to deal with certain real world applications in a virtual world. As with all things related to mirror worlds, the assumption often has to start at replicating exactly the way we currently do something. In some ways it could be considered a challenge to virtual worlds that they be made as effortless as a blackboard and chalk are in today's classrooms. The pencil and a piece of paper have done us proud over many years. There is a place for a simple pen outline or diagram, a chalk and talk to a sports team, a “hey guys, I have a great idea, look at this” etc. Those work really well in real rooms, where people just grab whatever they have to hand, adjust to their surroundings and draw and share. At the moment virtual worlds do make that quite tricky. Intricate details take time. We have pens and touch tablets, screen sharing etc., and it is of course possible to inject that into many virtual worlds in various ways.

However, I want people to go back and think this over. Even with the metaverse technologies we have today, we already have very effective and simple tools, often built in, that at almost no cost can be used to get a point across: more than one person present at a place in a virtual environment, able to see the ideas form and mental placeholders move around them as the person explaining the idea manipulates the environment.

I produced this video, a live, one-take event. The aim was to show that if I wanted to explain something, in this case it could even be a sports play of some sort, I could show my ideas with no frills and no fancy touches. The example uses Second Life or OpenSim (though this would apply anywhere you can rez anything) to show that creating objects, moving them around, assigning meaning through colour and position, and manipulating the environment live can be done simply and to great effect.
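To spell out just how little machinery that relies on, here is a platform-agnostic sketch in Python (not Second Life or OpenSim script, just an illustration of the idea): the whole “whiteboard” boils down to creating a block, moving it and recolouring it in a space everyone present can see.

```python
# Conceptual sketch only: the virtual world "whiteboard" reduced to three shared
# operations. Any platform that lets you rez, move and colour an object, and
# replicates that state to other viewers, gives you the same basic capability.
from dataclasses import dataclass, field

@dataclass
class Block:
    block_id: int
    position: tuple            # (x, y, z) in the shared space
    colour: str = "white"      # meaning is assigned through colour and position

@dataclass
class SharedSpace:
    blocks: dict = field(default_factory=dict)

    def create(self, block_id, position, colour="white"):
        self.blocks[block_id] = Block(block_id, position, colour)

    def move(self, block_id, position):
        self.blocks[block_id].position = position

    def recolour(self, block_id, colour):
        self.blocks[block_id].colour = colour

# A chalk-and-talk sports play: a defender in red, the ball in white.
space = SharedSpace()
space.create(1, (0, 0, 0), "red")
space.create(2, (2, 0, 0), "white")
space.move(2, (2, 3, 0))        # the play develops
space.recolour(1, "blue")       # reassign meaning on the fly
```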

The idea is to understand these basics and then be able to visualize the potential. If, with a little more effort, these movements are recorded and replayed, or interchanged with other services; if the blocks are replaced with more detailed, meaningful models; if simpler interfaces make moving the pieces even easier; if weight and friction and physics are used: they all start to enhance the basic feature, one that is an environment which is adjustable and not constrained by the limits of a whiteboard, a piece of paper or a laptop presentation.

The takeaway from this is that there are already many ways to use a metaverse platform to exchange ideas. It can be as simple as this. In a given situation this may be enough to sketch a solution. This basic experience can be made much more visually stimulating, if it needs it, but it required no buildings, places, presences or branded experiences to get this across in 2 minutes. Get some people together, just try it and see if it works.

**** UPDATE: In order to understand and actually visualize how these things can work when directed at a particular sport, and with gaming console technology, you can see what happens here in a fan video for Madden 09, using the tools available to describe American football offence tactics.