Once again, for my pitch at Metameets, I used a little bit of virtual world technology as a mental aid to help me figure out what it was I was going to say and in what order. I am finding this an increasingly useful technique, so as this is the second time around for it I thought I would explain it again.
I created the presentation in the usual fashion, as a set of Keynote/PowerPoint slides, mostly because I had ones from other presentations that I wanted to thread in and re-use. Once I had the single-threaded narrative sorted and the slides in the right order, a structure for what is in effect a story appeared.
In order to preserve this structure I took images from all the slides and put them into my local OpenSim.
Once in there these simple flat panels take on a relationship to one another that just does not happen when you are flicking from slide to slide.
In the picture above the pitch really starts at the back. The back row is the underpinning, an introduction: seven related slides to go through at any pace, but ideally quite quickly.
Next is a challenging step forward, as the next row comes forward. Supporting that is another row, which is held to the side a little as it is really optional. In this case it's a slide about how it feels to attach things to your avatar. That allows free expression: no script, just a memory of a feeling.
Then the next row is grouped into three groups. This is a change of pace: a set of ideas, with an example in each of the three.
Then the penultimate row as you come forward is the tying-it-all-together picture. Ideally, by the time I get here, the other three ideas have started to make a little sense.
Then there is the final slide to finish on.
Rather than sit and read the slides or over-rehearse, I just had this image of the layout on my iPhone. It is enough that I can see the sort of pages I had created, and when presenting I can think about where I am in the flow. So if there is some sort of disruption, or an idea comes to mind whilst talking, I can think quickly about where it will fit in, or whether it has missed its chance.
This is really just storyboarding meets mindmapping, but the multiple dimensions of it and the visual memory of having been there is incredibly powerful, and I recommend anyone try to structure some thoughts, ideas or key points in some way like this to see how it feels.
Does anybody else do this? I would love to hear stories about it.
This line:
“This is really just storyboarding meets mindmapping, but the multiple dimensions of it and the visual memory of having been there is incredibly powerful…”
Not “seen it”, not “rehearsed it”, etc. but BEEN THERE… Wow.
I know there are tools for bringing PowerPoint into SL.
I’ve been to many presentations where they are used, but have never used PowerPoint inside SL myself.
What is the name of the script/app that you use?
Could you message it to me next time you’re in-world?
SL: ChristopherBest Daviau
Is there such a tool for Apple’s Keynote?
Of course, I could always just export out as a PowerPoint.
Would be great to meet you in-world at some point to observe your process.
I’ve been working on making videos out of a multi-part Augmented Reality presentation so this really hits close to home for where my head is at right now.
It is a six-part series, and I have so far made videos for three. I will shamelessly plug them here if you want to check them out (I would actually love to have your thoughts). The series is called Augmented Reality & Emerging Technologies. These are the three I've produced thus far:
3.) Mobile Augmented Reality, The Ultimate Game Changer
2.) Augmented Reality On The Desktop
1.) Augmented Reality in Context
I will check those out Chris 🙂
I don’t actually use any specific automation here. As there is no notation to describe the flow of the Keynote, I just export the whole thing as images and slap them as textures on a board.
I then drag those boards around looking for the best layout or pattern.
With the previous presentation, the one that’s open in the IQ sim, I left the slides laid out in the round as an exhibit, but then started to build more SL-like 3D versions of the points in the slides.
Again it’s all very human-driven. I can see some tools could arise, in particular to organise the pitch in an AR format, so as to see it in the room you are actually in rather than in a mental copy held in OpenSim.
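To make that concrete, the first job of such a layout tool might simply be computing positions for the slide boards, back row first, with each later row stepping forward toward the audience. This is a purely hypothetical Python sketch of that idea (the function name and parameters are my invention, not from any existing tool, and the row counts mirror the layout described in the post):

```python
# Hypothetical sketch of a layout helper: place slide boards into
# rows, back row first, so each subsequent row steps forward.
# All names and parameters here are illustrative only.

def layout_rows(row_sizes, spacing=2.5, row_depth=3.0):
    """Return (x, y) positions for boards, one row per entry in
    row_sizes; row 0 is the back row, later rows step forward."""
    positions = []
    for row, count in enumerate(row_sizes):
        y = -row * row_depth                # each row moves forward
        start_x = -(count - 1) * spacing / 2.0   # centre row on x = 0
        for i in range(count):
            positions.append((start_x + i * spacing, y))
    return positions

# The layout described in the post: seven intro slides at the back,
# then smaller rows stepping forward to a single closing slide.
boards = layout_rows([7, 2, 3, 1, 1])
print(len(boards))  # 14 boards in total
```

From there, an in-world or AR front end would only need to map these flat coordinates onto boards in the actual room.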
We can certainly meet in-world and I can explain (once I get home, ash cloud willing).
Ian,
I was looking over this and it strikes me as a 3D form of VUE, the Visual Understanding Environment out of Tufts University. http://vue.tufts.edu/
I would find it interesting to couple your work together with the VUE.
Or couple VUE within an augmented reality application.
Does the iPad have a camera and run Java? 🙂