The STEMnet programme here in the UK enables people who are interested in sharing their expertise in STEM subjects with the next generation to get some support to do so. Schools ask their local STEMnet coordinator for help and volunteers step up and go along.
Yesterday at the Intech science centre (our local hub for STEMnet) I helped run an ambassadors' intro to Arduino (and a little bit of Scratch too).
Intech had bought eight Arduino starter kits. These are fantastic combinations of components and projects that have now become more official within the Arduino community.
The packaging and collection of it is very professional and, whilst still all based on open source, it provides way more than I was expecting in the pack. Before seeing the kit I had thought we could start with the basics of the blinking light (a standard piece of code) using the onboard LED, then build the flashing light, then cut and paste and build two. Basically following the 3 minute piece we did on Cool Stuff Collective.
The basic presentation I used was this one, it was not designed to be overly pretty as it was just a catalyst to get things going.
I also did not know what the experience of each of the attendees was likely to be. As it was we had a great and helpful mix of people: some very advanced hardware engineers who knew lots, some traditional IT professionals and programmers, and some very enthusiastic newcomers to the subject (but technically literate).
***update 13:23 14/2 (Just uploading again as it seems the Slideshare conversion repeated some words and removed others!)
The aim of the pitch was to suggest the basics (inspired by the basics in Choi Kwang Do)
I thought most kids would want to be into programming initially because of games. There is an instant disconnect between that and seeing all the code and the effort for an AAA title, which can be quite off-putting.
So I settled on the light switch as a real world blended reality example. Layering on that, the Arduino is a switch we can control, but the basics of input, process, output (or sense, decide, respond) are the fundamentals of everything. So if you get a basic piece of the experience dealing with getting some user input, deciding what to do and then making a change with an output, you cover an awful lot of ground very quickly.
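The sense, decide, respond loop can be sketched in a few lines. This is plain Python rather than Arduino code, and the names are made up for illustration; it just shows the shape of the pattern using the light switch example:

```python
# A minimal sketch of the sense -> decide -> respond loop, using a light
# switch as the example. These names are illustrative, not Arduino APIs.

def run_switch(presses, light_on=False):
    """Given a sequence of button readings (True = pressed), toggle a light.
    Returns the light state after each reading."""
    states = []
    for pressed in presses:      # sense: read the input
        if pressed:              # decide: should anything change?
            light_on = not light_on
        states.append(light_on)  # respond: drive the output
    return states

print(run_switch([True, False, True]))  # [True, True, False]
```

On a real Arduino the same three steps become digitalRead, an if statement, and digitalWrite inside loop(), which is why the light switch covers so much ground so quickly.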
Very often as techies and engineers we all see the intricate detail; we become very aware of everything that we don't know and how complex details can be. However, if we treat the knowledge as a set of repeating patterns, like a fractal image, we can talk about a basic building block without worrying about the complexity of the overall picture. After all, you have to start somewhere.
Anyway, a huge thank you to Sarah at Intech for hosting, for getting all the kit and for asking me to help in the first place 🙂 A huge thank you to the group of Ambassadors that braved the potential snowstorm, dived in, all had a go and got everything working in the few hours we had. It helped to debug what we need to tell students and other ambassadors.
I tweeted about this the other day, but after it came up in the Q&A session at yesterday's blended reality pitch I realized I had not put any more here about this interesting device.
The QUMARION is rather like the posable wooden mannequins that artists use to practise drawing figures. It is, however, fully instrumented with sensors to work with a digital description of a human skeleton.
So as you pose the figure that translates to poses in the 3d modelling package.
A purist 3d designer may regard that as undermining their skills in manipulating and understanding the interface on a 2d screen. However, this came up as an answer to a question about blended reality, as I was talking about how sometimes the technology can get in the way, and other times it disappears and lets us use what we know to enhance an experience.
The QUMARION is rather like using the real guitar in Rocksmith; it may be an appropriate tool for understanding and communicating with an application.
I know that when I use 3d packages there is a barrier in having to deal with a mental translation of a 2d representation. Being able to just pose a physical device and explain what is needed physically would work for me.
A long while ago I was trying to make some tennis animations for a well known Second Life project. I found myself standing and looking in a mirror, performing the action, then sitting down making that action work on a very simple digital rig, but then I had to tune it so that it looked better for the screen. I had no motion capture, which would obviously have helped in the first place, but for the extra artistic interpretation and subtle tweaks it would have helped a great deal to have had a hands-on device.
Now this device is only an input as far as I know, so there is an obvious extension in using it as an output device too. If I mocap a move, but then the device can play that back in physical steps and frames, then I could tweak and enhance it. Obviously in games there are some moves that just don't exist; you can't get certain flips and jumps happening. You can however start with a basis of what you can do.
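That play-back-and-tweak idea is really just keyframe interpolation stepped through slowly enough to adjust each pose by hand. A sketch of the mechanics, with invented joint names and a made-up pose format:

```python
# Stepping a mocapped move through discrete poses so each one could be
# held on a physical device and hand-tweaked. Joint names are invented.

def lerp_pose(pose_a, pose_b, t):
    """Linearly interpolate joint angles between two keyframe poses."""
    return {joint: (1 - t) * a + t * pose_b[joint]
            for joint, a in pose_a.items()}

def step_frames(pose_a, pose_b, steps):
    """Play back a move as discrete in-between poses, ready to be held,
    inspected and tweaked one at a time: a physical pause button."""
    return [lerp_pose(pose_a, pose_b, i / steps) for i in range(steps + 1)]

start = {"elbow": 0.0, "knee": 10.0}
end = {"elbow": 90.0, "knee": 40.0}
frames = step_frames(start, end, 2)
print(frames[1])  # {'elbow': 45.0, 'knee': 25.0}
```

Each intermediate frame is exactly the kind of held transition pose a mannequin could keep that a person bound by physics cannot.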
Again of course this relates to studying the forms in Choi Kwang Do. A physical, but digitally recorded, recreation may help someone even more to understand. Also a mannequin can be made to hold a position in the transition from one move to another that a person bound by the laws of physics cannot. It becomes a physical pause button.
Another extension to the idea: this device restricts you to one rig. A component model that lets you build any size or shape of joints to create the armature for any creation would be incredible. Combine that with the ability to 3d print the components in the first place, put them together, have that create the rigged model and then animate away. There are some fantastic opportunities for people to create interesting things as this approach evolves.
I am a big fan of Skylanders and their blending of physical and virtual play and it seems that they just keep coming up with new things. Last year we had Skylander Giants with new larger characters that also lit up (using induction current from the portal that you place them on). This year they have taken the physical side of things a bit further with this.
Swap Force makes the characters have interchangeable components. You pick a set of legs and a top and plug them together. This makes for many more combinations and ways to play. It is of course ramping up the character collectible side of things (as we discussed in Wesley’s podcast) and the “need” to buy more things, but they are toys that kids can play with.
Maybe they are getting one step closer to me being able to print the extra pieces that I win in game?
It is a couple of years since the first Skylanders game arrived that blended physical character collectibles with on screen action. I first got interested in it because of the vector it was plotting, with being able to potentially create new physical toy changes with 3d printing, and it also led to showing this advance in technology and games on the TV in series 3 of Cool Stuff Collective. This also made its way into my academic paper 🙂
Disney have now picked up on the technology with Disney Infinity. The official trailer shows a lot of game play with a few references to its version of the portal of power and the range of figures.
Here Andy @GeekDadGamer Robertson has some great interviews and exclusives. We have both talked about Skylanders and the like in various ways, including sharing a podcast on the subject 🙂
Of course, as with many technologies, getting a physical object of some sort to be part of the game play is not brand new. The usual Gartner adoption curve applies. Someone invents something; it generally will fail first time until everything else catches up, new variants on other tech evolve and become accessible, and social changes occur. One of the early ones that you may remember was R.O.B., and also toys that interacted with the TV that were before their time, such as Captain Power.
This blending is occurring more than ever, not just character collectible toys but apps on our second screens; even Antiques Roadshow has got in on the act. The way we interact with content is changing drastically. Will everyone be able to keep up? Let's see 🙂
Recently I wrote in Flush Magazine about how a Microsoft patent around Kinect room scanning and projection looked like a holodeck; it also ties in nicely with how Kinect Party does some interesting person removal already. Now Microsoft Research have released this proof of concept video that starts to show the sort of blended reality they are heading towards with IllumiRoom. A while back Philips had a TV with a lighting system that produced ambient lighting based on what was on screen, but this is something different. It requires the projector to understand the content on screen and the geometry of the room, so that the peripheral images it projects make the on-screen image look like it extends into the room.
My physical copy of the Computer Networks journal that contained my first academic paper arrived through the post.
With my usual slightly sideways look at such things, I used my Cool Stuff Collective G33k t-shirt as the profile shot 🙂
I am not sure if this is unusual for an academic paper, but I managed to get all my favourite subjects in: virtual worlds, 3d printing, and even Skylanders gets a mention.
I am sure I will get around to updating my old fashioned CV should anyone ever need my services, but for now it's on the Writing page on the tab above this 🙂
I hadn't fully appreciated how the whole process works, but it seems that people write papers and then Reed Elsevier charges people to read them. It seems my little contribution is worth $31.50 on this link, so maybe I should start charging for my writing, or just get a job as a journalist? I would of course have to improve my spullchuckin 🙂
Happy new year one and all 🙂 it’s 2013 and the human race and the planet have not been Mayaned out of existence so we can once again look into the future. One way is to dive into Kickstarter and crowdfund some interesting projects. If you haven’t looked at this before it is pretty much the same as pre-buying something you are interested in, though you are actually providing the funds to help a project that you think will work or has some merit.
One such project is Elite: Dangerous. If, like me, you grew up playing the free-roaming, ever-changing, vector-based space game Elite, you will know it is due a reboot, and David Braben has a Kickstarter campaign going. With these sorts of things you set a reserve price, the amount of money that needs to be pledged. If you don't hit the target, the project doesn't get the money.
This one has 61 hours remaining and has £1,186,093 of its £1,250,000 pledged. So there is already over a million pounds sitting wanting to be invested by 20,000 gamers, fans and interested parties. It just needs a little more. When you invest in these things you get various tiers of rewards: access to early betas, forums, or in this case your name used in the game 😉
Once you have pledged, an interesting dynamic of wanting to be right kicks in. If it doesn't get funded you haven't lost anything, but once you pick a project you feel part of it. This of course works better in small community ones; once you get up to this sort of level it is really a variation on a corporate machine. However… this game needs to get funded, so if you haven't then go pledge something for the future. It is going to take a year of development at least, with a 2014 release.
Who knows, I might even try a Kickstarter myself this year. We have some code and a working prototype, and finding regular finance for something that is scientifically intriguing, socially intriguing and pushes social media and social games somewhere else is proving tricky, so what the heck. Come to think of it, some of the TV shows should get done this way. It's a good way to prove an audience and hone your message.
UPDATE Well they made the funding and exceeded it 🙂 making it a record breaker.
The wonderful Happy Action Theatre by Double Fine has been rebranded and given away for free as Kinect Party. It is a very amusing set of Kinect based toys to play with, plus some DLC. It sees and hears you in the room and does all sorts of collision detection games, like popping a room full of balloons, or augmented reality things that add hats and gadgets to you as you bounce around your living room.
However hidden away are some really clever techniques. The Kinect is scanning the geometry of the room not just tracking the players. This means balloons bounce off your sofa, or pigeons land on tables.
The really clever bit is when as a player you are replaced. Take a look at this picture
The predlets were both in shot, but they get replaced by skeletons. The skeletons are thinner and have more transparent bits than their human counterparts. The game, using the Kinect, manages to rebuild the background live behind them, removing them from the scene completely, then adding in a new character. This is not a stuck-on AR overlay; it is much more clever, or at least appears to be.
In another example both predlets were under water. Predlet 2.0 was dead centre of the picture, grabbed a hook and was pulled upwards on the TV screen, to be replaced by an approximation of what was behind him. He is there but invisible.
Shortly afterward predlet 1.0 did the same thing and voila… gone (but still in the room looking at the screen). Having obscured her Grandad behind her in the picture previously, there he is, as if she were invisible.
The scene is drawn as an underwater one, so there is lots of wobbling of waves that would mask any odd image artefacts, but this is being done on live video in a free application on a console that is nearing the end of its run.
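My guess at the kind of trick involved: keep a running memory of the background, updated wherever no player is detected, then paint that memory back over the player's pixels. This is a speculative sketch with toy data, not Double Fine's actual method:

```python
# Hypothetical person removal: remember background pixels whenever the
# player mask says they are visible, then composite that memory over the
# player in the current frame. Pixel values are strings for clarity.

def remove_player(frames, masks):
    """frames: list of frames (rows of pixel values).
    masks: same shape, True where a player is detected.
    Returns the last frame with player pixels replaced by remembered
    background (None where the background was never seen)."""
    h, w = len(frames[0]), len(frames[0][0])
    background = [[None] * w for _ in range(h)]
    for frame, mask in zip(frames, masks):
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:           # background visible here
                    background[y][x] = frame[y][x]
    last, last_mask = frames[-1], masks[-1]
    return [[background[y][x] if last_mask[y][x] else last[y][x]
             for x in range(w)] for y in range(h)]

# The player stands on the right in frame 1, then moves left in frame 2,
# so by frame 2 every pixel's background has been seen at least once.
f1, m1 = [["sofa", "kid"]], [[False, True]]
f2, m2 = [["kid", "sofa"]], [[True, False]]
print(remove_player([f1, f2], [m1, m2]))  # [['sofa', 'sofa']]
```

The wobbling underwater effect would then hide exactly the spots where the remembered background is stale or was never observed.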
This puts what I wrote in Flush Magazine about the next gen Kinect and how it may deal with removing people from a scene into more context.
(not only is this game/toy clever it is really funny and a great laugh BTW 🙂 )
I hope this puts into context me spamming Facebook with Kinect Party photos, but I loved it when it was Happy Action Theatre and it has got more clever and engaging (and free).
Merry Christmas one and all, I am off to eat some mince pies.
I finally got around to trying GMod (Garry's Mod). This has been around for a while but is now on Steam for Mac and PC with some interesting new features. A good few years back I remember us sitting in a bunker at Wimbledon in some downtime, marvelling at the physics engine demo films for Half Life 2 and the Source Engine: watching wood catch fire, things rolling around, dropping and floating in a high end game engine. Playing with physics is always fun, in code or with a toolkit. At the time there wasn't a toolkit to build with simply, though along came things like Second Life with some basic physics and lots of multiuser features.
It is a while since I have bothered doing anything with my desktop Windows PC other than get Minecraft going for the predlets. However, I ordered a Windows based Kinect. Having got the Xbox one working on the Mac, I wanted to do some development with the official Windows SDK. I only had a Vista machine, and it turned out I needed Windows 7 or above, so I took the plunge and upgraded to Windows 8. This was moderately straightforward, apart from having to dismantle the machine to find the serial number on the soundcard to find the right driver. I then got the Kinect working and downloaded SDKs and versions of the development tools. However, it was such a mess getting anything to work that I started to lose interest!
How can I use the Kinect? Well, a quick google and I found that GMod was there on Steam and for £5.99 gave me access to a fantastic building toolkit with all the physics and interesting options of the Source Engine. It also mentioned that it now worked with the Kinect, and it works as a multiplayer network too. So this is metaverse territory! 🙂
The palette of things you can rez in GMod is extensive.
Once you rez something like a rusty bath you can spin it around in space with the physics tool.
If you let go it drops to the floor, with a satisfying physics engine bounce and crash.
If you want to put a ragdoll physics scientist in the bath you can do that too.
It is these ragdolls that have an extra context menu on them. If you have a Kinect plugged into your Windows machine, and have the right drivers (1.6), then you can control these ragdoll avatars with your motions. Not only that, but you can control more than one at the same time. Lots of scope here for animated dance sequences.
The first time it worked was a magical moment, it blended the initial memories I had of the engine, with the buzz of what we have today to experiment with.
The skeletons of the Source avatars do not seem to be the same resolution as the more detailed Kinect skeleton, so this is not perfect for my Choi Kwang Do attempts to instrument the body, but… they do help to try out the Kinect. The rest of the things we can build, and that the predlets will build, is going to be interesting.
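The resolution mismatch boils down to retargeting: picking which of the Kinect's many joints drive the rig's few. A sketch of that mapping, with invented joint names rather than the real Source or Kinect SDK identifiers:

```python
# Retargeting a detailed tracked skeleton onto a coarser game rig via an
# explicit joint map. All joint names here are invented for illustration.

RIG_MAP = {
    "rig_head": "kinect_head",
    "rig_arm_left": "kinect_elbow_left",    # coarse rig: one joint per arm
    "rig_arm_right": "kinect_elbow_right",
    "rig_spine": "kinect_spine_mid",
}

def retarget(kinect_pose, rig_map=RIG_MAP):
    """Copy positions from the detailed skeleton onto the coarse rig,
    silently dropping joints the rig cannot represent."""
    return {rig: kinect_pose[src] for rig, src in rig_map.items()
            if src in kinect_pose}

pose = {"kinect_head": (0, 2, 0), "kinect_elbow_left": (-1, 1, 0)}
print(retarget(pose))
# {'rig_head': (0, 2, 0), 'rig_arm_left': (-1, 1, 0)}
```

Whatever the tracked skeleton reports that has no slot in the map is simply lost, which is why fine detail like Choi Kwang Do hand positions does not survive the trip onto a simple ragdoll.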
Oh, there is also an amusing "minecraftify" option in GMod that turns all the textures blocky. So worlds are merging.
I have yet to try using the 360 Kinect on the Mac with its version of GMod, but there is certainly a lot of exploring to be done.
Happy Holidays 🙂
Curiosity went live on smartphones a few days ago. It is a well named free app/game/art installation/research project from Peter Molyneux's new company 22Cans. The company name refers to the 22 gaming experiments that they intend to create.
I was intrigued as to what the team was going to come up with and how they would strip things right down to gaming basics but apply them to the dynamics of the devices and of social media. Calling it an experiment is also good because it does not set the expectation that anything will work, merely that it is worth a try.
Curiosity is basically a giant cube, a shared cube, a cube that each client sees in the same way. The cube is covered with millions of smaller, pixel like cubes. Each player can zoom into the cube surface and tap a piece of the surface to chip it away. Underneath is another layer of the cube surface. Everyone has to join in to remove an entire multimillion piece surface with their own efforts, to collaboratively clear the level and be able to start on the next one. In a giant, socially networked version of pass the parcel there is a secret final layer at the centre of the cube, hence the title Curiosity. There is to be one, and only one, winner of the information in the cube: the person to click the last ever surface cubelet on the last layer. Each layer on the way down is a different texture and coloured picture. The first layer from Day 1 was black and shiny, but was soon replaced with a green, blobby, lava lamp type of picture.
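The core mechanic above can be modelled in a few lines: layers of cubelets, each tap removes one, a cleared layer reveals the next, and only the very last tap on the very last layer wins. The sizes here are tiny for illustration; the real layers have millions of cubelets:

```python
# A toy model of the shared cube: taps chip cubelets off the current
# layer, and exactly one tap (the last, on the last layer) wins.

class Cube:
    def __init__(self, layers, per_layer):
        self.layers = layers          # total number of layers
        self.per_layer = per_layer    # cubelets on each layer
        self.layer = 1                # current layer, counting downwards
        self.remaining = per_layer    # cubelets left on current layer

    def tap(self):
        """Remove one cubelet; returns 'won' only for the final tap."""
        self.remaining -= 1
        if self.remaining == 0:
            if self.layer == self.layers:
                return "won"          # the one and only winner
            self.layer += 1           # reveal the next layer down
            self.remaining = self.per_layer
        return "chipped"

cube = Cube(layers=2, per_layer=3)
results = [cube.tap() for _ in range(6)]
print(results[-1], cube.layer)  # won 2
```

Since the number of layers is just data on 22Cans' servers, nothing stops them from adding or reshaping layers while the game runs, which is exactly the point made below about it not being a static wrapping.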
The basic game mechanic of tapping away with your fingertips for ages and ages, emptying areas or randomly dotting around, is certainly not the most taxing game idea. It does though have the pure elements of gaming grind that you find in any RPG, and in particular in MMOs. The visuals are of course basic: a textured cube covered in deliberately wonky tiled cubelets, which you only appreciate close in.
The game lets you play completely anonymously with your fellow tappers, or you can hook it to Facebook for the usual friend spamming and tracking. Each tap produces a score in the shape of coins. These coins can be amassed to use special tools like bombs to remove the surface more quickly. Save up enough and you get a range of chisels.
So all in all this seems almost completely pointless as a game, yet it does appeal to a certain compulsion to join in: not so much the expectation of being the last person discovering the end point, but seeing why it is worth bothering at all. This is an intriguing introspection on Curiosity, as I am curious as to how the experiment is going and people's motivations to try it.
This is where it is very interesting as to what 22Cans are going to find out. No doubt lots of people will try it, like any free game, and have a little go. They only have to tap one cubelet away; if enough people do that, the cube will be cleared. The number of layers of the cube is obviously just data, so 22Cans can do what they want; unlike pass the parcel this is not a static wrapping.
How people try to maximize their scores is interesting too. A bonus multiplier builds up as long as you are tapping and clearing at a regular rate; stop and it's reset, miss and it's reset. So zooming in with precision, to get a large enough view of the cubelets for fingers not to miss, and systematically clearing a screen offers one type of reward; zooming out and fast random scattering over an area also works. I am assuming that the context and usage of each player is the sort of data being collected.
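The multiplier rule as described can be sketched directly: it climbs while you keep clearing at a steady rate, and a miss or a pause knocks it back to one. The exact numbers here are invented, not 22Cans' tuning:

```python
# A guess at the bonus-multiplier scoring rule: +1 to the multiplier per
# consecutive timely hit, reset to 1 on any miss or pause. Values invented.

def score_taps(events, max_gap=1.0):
    """events: list of (time, hit) tuples in time order.
    Returns the total coin score."""
    score, multiplier, last_time = 0, 1, None
    for time, hit in events:
        paused = last_time is not None and time - last_time > max_gap
        if not hit or paused:
            multiplier = 1            # miss or pause: back to square one
        if hit:
            score += multiplier       # each hit pays the current multiplier
            multiplier += 1           # and grows it for the next one
        last_time = time
    return score

# Three quick hits, a miss, then one more hit: 1 + 2 + 3 + 0 + 1 = 7
print(score_taps([(0.0, True), (0.5, True), (1.0, True),
                  (1.5, False), (2.0, True)]))  # 7
```

Under a rule like this, both strategies pay off: precise systematic clearing keeps the streak alive, while fast scattering trades occasional resets for raw tap volume.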
I have often pointed out that many games companies do not understand true high volume sites, with the exception of the big MMO guys. The focus on 3 player shards in games does not prepare them for the massive influx of small packets of information from hundreds of thousands of people at a time. The launch of Curiosity will certainly level up some sys admins. The initial experience was one of the servers being all over the place. It's free and it's an experiment, remember, so no one should be up in arms at this.
22Cans now have some detailed massive scaling experience, assuming they did not have some already. It is also a game that is both social and anonymous, something that is in my particular area of interest. Pseudo anonymity is always intriguing, and social media can be used to provide ready and willing game players, so you can work as a team despite not knowing one another well. This is sort of lacking from the cube world. There is a sense that we are, to quote a much mis-used phrase, "in this together", but the sense of others is an occasional server refresh and lots of your cube face disappearing as someone else clears it. You can pay a few of your chip coins to look at friends' Facebook stats, but that is mostly buried away. Again, this is an experiment, so I am guessing they are also looking at how many people are bothered about looking at one another's stats 🙂
You can of course theorise as to the depth of experiment that this actually is. The fact I am bothering to write about it, to have even downloaded it, may itself form a small part of some massive data gathering. The cube has the #curiosity hashtag and other text messages floating across its surface, so engagement outside of the game is certainly being explored.
I will certainly be following the next 21 experiments with interest. I suspect many of them will start to cross over into some of the work I have been doing and even one of the patents 😉 as Peter and his team look at how social media can be an operating system for gameplay rather than a conduit or portal for games to be delivered into. I may have to offer them some consulting time 🙂
So get tapping, even if it is just to say how pointless it all is. Of course not tapping or bothering will be valuable data too. Yikes I am wrapping myself in a knot!
UPDATE 9/11/12
This video has appeared, which is a great, and honest, behind the scenes admission of the size and scale of dealing with massive numbers of users and data with a small team. It is not a grovelling excuse or a press release; it is genuine frustration, and it indicates the hard work and passion of the team. Keep going 22Cans 🙂