For reasons that will become apparent in the fullness of time @andypiper was kind enough to bring his @ARDrone over to Feeding Edge HQ again last night. The aim: to investigate a good way to pop balloons with it. It sounds straightforward: get something sharp, but maybe not too sharp (for safety reasons), fly at the balloon and BANG!
In reality this was not the case. That's not to say it's impossible; we did in fact pop some balloons, quite a few actually. Generally though it was fatal for the Drone. The safety systems that stop the rotors slicing through things cut in and the Drone, quite rightly, fails safe and crashes to the ground. There is usually a bit of a reset after this. The rotors tended to be the most effective at popping balloons. Sharp attachments out the front still allowed a good flying experience, more so than the cocktail sticks we placed on the outer edges pointing upwards.
What is amazing is how robust the Drone is when it does hit the floor. The outer protection for the blades works very well and generally it lands flat on its head. There is a cushioning system in there to airbag the poor CPU and connections.
Again a huge thanks to @andypiper for risking his device and thank you too for not crashing into the predlets, though our wall calendar was not so lucky 🙂
We had quite a G33kfest. It is not often anyone comes to Feeding Edge with gadgets that we don't have. Andy brought his Kindle (which I had reviewed on The Cool Stuff Collective ep 12 but don't own) and the AR Drone (must get me one of those!). I responded by showing him my MIDI Rock Band guitar attached to the Mac and converted him to the power of 3D TV with gaming 🙂
I was recently asked (via Twitter initially, I should add, talking to @GeekDadGamer) if I would be interested in contributing game reviews to the Gamepeople site. Gamepeople is a site where people review games from a personal perspective and with a niche angle, different from the more generic reviews. There are reviews from a family perspective, from both parents' and kids' points of view, from sports and hobbyist specialists, and from the column I wrote some guest pieces for, the tech gamer column.
By way of an initial piece, and with the purchase of my 3D TV, it made sense to explore the additional features of Gran Turismo 5 in stereoscopic mode.
So here is the resulting article, which makes more sense posted on Gamepeople than here.
It has been a while since I blogged as part of a team and it is good to have a watchful editor to get my wording and grammar tighter.
Following on from that I was challenged to write about the completely different experience that is Double Fine’s Costume Quest on the 360. The technical angle was probably a little less obvious than the 3D of GT5, but the gamecraft and production values, and the back story of having to re-appraise AAA development made for good article fodder.
There should be more to come so I will keep you posted 🙂 You can follow Gamepeople on Twitter and Facebook of course.
I have now had the chance to spend a decent amount of time trying out my Panasonic 3D 42″ TV. I have been intrigued by the demos of 3D content I have seen, and in particular the consequences for gaming and what prolonged gaming feels like.
There seems to be enough 3D content out there to warrant the upgrade: Xbox, PS3 (games and 3D Blu-ray) and Sky 3D all have things to offer.
The first extended game I had was a few hours on Gran Turismo 5 on the PS3. Initially there was a disorientating effect using the in-car view: the layers of HUD information sitting closer than the car details made it a little harder to take in the peripheral information from the HUD. After about 20 minutes though I think my brain adjusted and it became a very compelling experience. If anything it was braking that became more obvious, and some of the turn-in points and reference points stood out more. I have been playing driving games for a long while and have seen others struggle a little with the concept of slowing down for corners, as unless you immerse yourself in the experience mentally you can't feel the forces of the car. The 3D certainly helped with this. Spinning out was also a bizarre experience, and it seemed quicker to deal with it and regain situational awareness.
Equally wearing the glasses has an odd bubble effect that makes it feel a bit more like it does when you get in a car. After the initial 20 minutes adjustment I felt no more odd finishing the session than playing normally.
Next up was Black Ops on the Xbox 360. Turning the 3D on was a bit more fiddly than in GT5, as the TV did not respond automatically and you have to select side-by-side 3D manually; the PS3 switches things automatically. (It seems Sky 3D doesn't switch either.)
The HUD crosshair is a little distracting initially as it breaks the immersion; there is an option to turn this off but that seemed to crash the machine! That aside, the experience is brilliant IMHO. I had played through most of the game already, but the last few chapters in 3D were amazing. The disorientated running around towards the end with the "numbers" zooming around was a stunning piece, and I am looking forward to going back and trying things like the first Vietnam sequence.
However the real test was to play the online training multiplayer with bots and with @asanyfuleno. I found that I felt more in control and aware of my surroundings. I think this may be similar to the driving game mental model. I know that in FPSs it takes me a while to feel the levels. Here I felt instantly connected to the environment, and whilst this would still not equate to pwnage online I felt the 3D levelled me up, at least in situational awareness.
The view down iron sights and cross hairs is also remarkable.
As with the driving, the glasses did not get in the way but seemed to place you somewhere other than the room you are actually in. This was not something I was expecting to happen, but thinking about it, it makes sense.
The third game was Tumble on the PS3 with Move. I have said before how fantastic Tumble is with the tactile feedback, and I have to say it is very much enhanced by the feeling of space created with 3D. The need to use the shadows of the blocks as reference points just melts away.
So a night's gaming in 3D was enhanced, and from my point of view it will only get better as we get more used to it and designers take advantage of it. Being an old school gamer I will still play 2D games, and having a four year old in the house (where the advice is not to use 3D for under-sevens) means we can't do all our gaming in 3D yet. However, as many of the games I end up playing are 18-rated, that is not such a big issue.
There was an additional serendipitous happening when I purchased the TV from Best Buy. Whilst waiting for the box to come out of the stock room I got talking to the head of the 3D TV section, who also happens to run a startup creating 3D displays (more of that in a later post). It was fascinating to be talking to someone who knew a lot about the subject. It's one of the reasons I like Best Buy: no pressure or hassle from salespeople, and when you do talk they know their stuff. This was in contrast with my experience when I popped into Comet to get an extra optical audio cable. The salesman was pleasant enough, but when he asked if he could help me I was not expecting to have to explain to him what an optical audio cable was for. He also made the usual "pay more for cables as the signal is better" type of comment, which for many digital things simply isn't true any more.
So…. shop in Best Buy!
This week Dave Taylor/Davee Commerce and Robin Winter had a special on Treet.tv about many of the virtual world projects in Second Life that Imperial College London have been up to. It is a great show to watch to see the variety of ways Dave has put Second Life to work, from public information to targeted patient experiments and doctor training.
The doctor training and evaluation appears at around 32 minutes in. Dave says: "This is where we have our virtual patients, and these patients are controlled by software actually outside of Second Life. That software has a knowledge of the patient's physiology and condition." He also explains there are 3 wards with 3 patients in each, giving 9 levels of scenario difficulty.
"We are using this to research how we can assess trainee doctors at different levels of training." "We have tested about 60 doctors so far on this."
I am glad this is out in the public, as it has been part of the work I have been doing in SL. I can't explain exactly what does what, as it's a private project, but as Dave points out, the patients and the interactions are controlled from outside of Second Life; my part in SL is the broker talking to that external model. I also ended up building the dynamic menus and handlers in world. The menus are based on the data coming back, and align to the correct place in world so they are designer friendly. This was built before web on a prim existed, and we aimed to do everything in world. As you may know, handling text can be a problem in SL, and variants of Fasttext and XyText came to the rescue, though rezzing a dynamic button and making it know what it is supposed to do is a non-trivial task. This was also before in-world HTTP servers were stable, so SL is the controller asking the external software what to do next.
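To give a flavour of the broker pattern described above (the real protocol and field names are private, so everything here is invented for illustration): the in-world controller asks the external model what happened, gets back a compact message, and turns that into the dynamic menu buttons to rez. A minimal Python sketch of the parsing side, assuming a hypothetical pipe-delimited reply format:

```python
# Hypothetical sketch of the SL-as-controller broker described above.
# The message format ("state=...|menu=...") and the field names are
# assumptions for illustration, not the project's actual protocol.

def parse_model_reply(reply: str) -> dict:
    """Parse a pipe-delimited reply from the external patient model,
    e.g. 'state=stable|menu=Check pulse,Give oxygen'."""
    fields = dict(part.split("=", 1) for part in reply.split("|"))
    menu = fields.get("menu", "")
    return {
        "state": fields.get("state", "unknown"),
        # Each menu item becomes a dynamic button the in-world
        # handler would rez and position for the designer.
        "menu_items": [item.strip() for item in menu.split(",") if item.strip()],
    }

def next_menu(reply: str) -> list:
    """Return the button labels for the next step of the scenario."""
    return parse_model_reply(reply)["menu_items"]

reply = "state=deteriorating|menu=Check airway,Administer fluids"
print(next_menu(reply))  # ['Check airway', 'Administer fluids']
```

In the real thing this logic lives in LSL, with the extra (non-trivial) work of making each rezzed button know which message to send back to the broker.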
It has been a fascinating project, as have its follow-on projects, which have increased in complexity and interactions. Making SL a component in a system, not the sole piece of the project, makes for greater richness and flexibility. After all, SL is not a database/data handling application.
What is great is that Robin, who is one of SL's foremost designers (along with his other half) and has been for years (he built the original Dublin sim), is able to craft animations and objects and then trigger them into existence using our message protocol, after the external software model tells my broker code that it has some changes to display.
There are a few of us pushing the boundaries of data interchange with SL and also with OpenSim and other virtual worlds. I hope this helps people understand that we can do very complex integrated tasks using the best of a virtual world and the best of a traditional server application. Integration is the key.
The wires are full today of the results of some work by Oliver Kreylos and others taking the only just released Kinect and applying it to places other than the Xbox 360 set of games. This is all very open source and hacky, but it shows what can be done in a very short time when talented people, not always in the original company, have access to the kit and the freedom to operate.
Much of the code is starting to be released (though one dependent piece of code is not yet). Also, as it is GPL licensed I am sure many corporate colleagues will be warned off; however, it's here.
I have been having a look at the beta of Unity3d 3.0 for a while, but nothing is as good as it going gold and live. It comes packaged with a wonderful looking demo called Bootcamp. There is a version of the demo (with a bit of an in-game cutscene before getting to the third person part).
Unity3d is simple to get running; it is nicely self-contained, which means you can just dive in and make things. There are some new scripts to help, like a third person controller script, and a new demo character with animations of a construction worker. It really could not be simpler, yet there are tonnes of features for the more pro-focussed game programmer and graphic artist.
The demo uses a very large terrain with lots of detail and debris. The player character is using a locomotion system to animate over and around the obstacles. Physics is in full effect if you enter the derelict building and start shooting at windows and cans. Things deform, break and fly around.
It's all just sitting there on your hard drive, ready for you to explore and see how it's all created.
There are some handy hooks into MonoDevelop to allow editing of code, breakpoints and inspection as we have got used to on other development platforms. In true Unity style it just works.
If you have not already downloaded the free version, and you are in any way a techie or designer, go and get it now!
It really should be something in every school IT lesson too. The ability to make things happen with real programming behind it will help more kids get to understand programming and the sciences behind it.
I really wish we had had this when I was starting out, so now I want everyone to go and have a look.
I have to admit to having been a little dismissive of the PS3 Move controllers. I should have known better really, as when I get that dismissive feeling I usually dig a bit deeper to consider why, and find the gem I am missing.
In this case it took getting a Move controller in the house and playing with it. I had only really seen the table tennis game we had on The Cool Stuff Collective show, which whilst it is great seemed very Wii like in what it did. Yes you have brilliant control over the bat but conceptually it is the same.
Anyway on Saturday the predlets and I wandered into our local Best Buy and I thought we should get 2 controllers.
Already having a camera and a few things like EyePet that could be upgraded, it was worth it.
You have to charge the things up of course, but as you have to wait several hours for updates and demos to download there was time for a charge.
The ball on the end of the controller changes colour depending on which controller it is in a multiple set. In the shop I was confused as to whether I needed the extra movement controller for anything, so I left that one where it was (more on that at the end of this).
Whilst mentally I have been all about Kinect on the 360 and the fantastic full body motion capture it does for control, the Move controller really does some things that the Wiimote cannot do and, to some extent, that Kinect cannot do.
The Wiimote acts more as a 2D pointer; it does not deal with depth in the same way as a Move does. In addition, the Move is keyed into the camera, so the experiences are able to put you in the environment, track the controller and replace it on screen with something else. See below with the foam hand.
This feedback loop is most noticeable in EyePet. The previous incarnation used a tracking marker of card that you moved around the augmented environment. It worked, but if you turned the card at an angle you would lose tracking (as the marker was 2D and became obscured). The glowing Move ball and its accelerometers know if you twist, turn or angle the controller in any direction, and you can see that on screen with the augmented item. This makes EyePet even easier to use. It is more obvious to hold the glowing ball than a flat piece of card. The predlets got straight on with EyePet and were able to do even more and play the games better, so Move is worth it for EyePet alone.
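The reason the glowing ball gives the camera depth, where a Wiimote-style pointer can't, comes down to the ball being a sphere of known physical size: the bigger it looks in the image, the closer it is. A rough Python sketch of that idea using the standard pinhole camera relation (the numbers are made-up assumptions, and this is certainly not Sony's actual tracking code):

```python
# Illustrative pinhole-camera depth estimate, not Sony's algorithm.
# Because the Move sphere has a known real diameter, one camera can
# recover its distance from its apparent diameter in pixels:
#   depth = focal_length_px * real_diameter / apparent_diameter_px

BALL_DIAMETER_CM = 4.5   # assumed real diameter of the glowing sphere
FOCAL_LENGTH_PX = 600.0  # assumed camera focal length, in pixels

def depth_from_apparent_size(diameter_px: float) -> float:
    """Estimate distance from camera to the ball, in cm."""
    return FOCAL_LENGTH_PX * BALL_DIAMETER_CM / diameter_px

# The closer the ball, the larger it appears in the image:
near = depth_from_apparent_size(90.0)   # 30.0 cm away
far = depth_from_apparent_size(27.0)    # 100.0 cm away
```

The accelerometers and gyros then fill in the twist and tilt that the camera alone can't see, which is why turning the controller doesn't break tracking the way angling the flat card did.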
This direct connection to the onscreen environment and the blended augmentation of the magic mirror effect does not feature in all games though.
However, one feature that felt great is in the downloadable game Tumble. This is a block stacking game, a clone of Boom Blox on the Wii, which allows for a good comparison of the evolution.
Tumble shows the Move as a laser pointer on the screen, but when you pick up a block you have to reach in and out of the screen as well as move in the 2D plane to stack the block. As you place the block you feel the very subtle feedback of the rumble pack. It is haptics lite, but with the visuals and the physical activity the feedback loop is very believable. It is this feedback that Kinect will not be able to do. Obviously in a dance game on Kinect the feedback and on-screen avatar movement will immerse you, but you will not feel anything in other experiences. In addition, Tumble is 3D enabled (which we are not yet!), but I can imagine the 3D depth combined with the physical movements and tactile feedback will make this incredibly immersive.
Another tactile element to Move is the fire button in a “light gun” game. In some of the point and shoot games a trigger to press is very satisfying.
So you can see there is more to the comparison of Move and Kinect and Wiimote than meets the eye or hand.
Finally, I got Resident Evil 5 as a Move compatible version. I don't really like zombie games, and whilst I appreciate the wonderful nature of Res Evil it always seems the controls Capcom use are counter-intuitive. With the Move this was the same. The trigger is used to aim the gun and the thumb button to fire. In addition, whilst this says Move compatible, the boxes need to be a bit clearer about whether you need 2 glowing ball controllers, or 1 glowing ball and one movement controller. Res Evil needs a joypad to move around. You can hold a SixAxis controller in your left hand and grapple with movement, but really the smooth way is the extra movement controller. That is not clear from the box (though obvious in some ways if you know the game). As Res Evil is quite a tricky game, dealing with odd controls and not just shooting stuff meant it was not a great experience out of the box. I will persevere though.
It is a pity there was not a better line-up of games at launch, something that seems to be a very Sony thing to do. Lots of coming-soon 2GB demos to wait to download (or get on a disc if you buy the camera with the controller).
However, I like it now, it has a place and it will do some very interesting things IMHO.
After recording the Cool Stuff Collective second episode using the haptic 3D device, it made sense to get one and see what the predlets, and maybe their school, could do with it. Whilst the haptic show does not air until 20th September, I thought it was worth sharing some initial impressions of ownership. Basically, it is awesome!
The entire package is called Chameleon 3d from A1 Technologies. A1 Technologies are the UK distributor of both the Novint Falcon (the haptic device) and the Anarkik3d Cloud9 modelling package.
The Falcon has a self-contained set of mini games and tutorial demos. Although Windows drivers tend to be a bit of a pain, if you follow the right instructions it does just work first time. The initial demo lets you feel a number of textures and forces: rough, smooth, magnetic, rubber, honey etc. It makes for a very impressive feel for the potential.
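Those different "feels" boil down to simple force laws computed hundreds of times a second: rubber is essentially a stiff spring pushing back the deeper you press, honey a drag force proportional to how fast you move. A toy Python sketch of the idea (the gains are invented, and this is a textbook approximation, not Novint's actual rendering code):

```python
# Toy sketch of haptic surface rendering, not Novint's implementation.
# A haptic loop reads the device position each tick and commands a
# force back; the force law determines what the surface feels like.

def rubber_force(penetration_m: float, k: float = 800.0) -> float:
    """Rubbery surface: stiff spring pushing back out of the surface.
    F = -k * penetration, and zero when not touching (N)."""
    return -k * max(penetration_m, 0.0)

def honey_force(velocity_m_s: float, b: float = 20.0) -> float:
    """Honey: viscous drag proportional to velocity, felt as
    thickness whichever way you move (N)."""
    return -b * velocity_m_s
```

Pressing 1 cm into the "rubber" commands about 8 N back at you; in the "honey", moving twice as fast doubles the resistance, which is exactly why it reads as sticky rather than springy.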
Whilst I did spark up the haptic patch for Half-Life 2, I was more interested in the creative modelling of Cloud9 for the moment (more on virtual world and game navigation in another post).
Here is the Falcon nestled with some much older tech gadgets.
The Falcon is quite a large device but sits on a nice sturdy base. It needs some weight to be able to offer resistance and not zoom across the desk. The footprint is about the same as the old Microsoft SideWinder force feedback sticks I used to have for Combat Flight Simulator.
On Cloud9 I dived straight in, and in time-honoured tradition "it all started with a cube".
I pushed, prodded and deformed the cube both from the outside and from the inside too.
The video shows the resulting modified cube, then it being published to Unity3d as an OBJ file. Finally it shows predlet 1.0 having a go. It really is very intuitive and very compelling to model with. I have a few ideas of things I want to try, but like any modelling tool it also needs some inspiration and talent. As the creations can be exported as OBJ (or STL if you want to 3D print them) they can of course be put into other 3D packages to have extra effects applied.
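Part of why OBJ makes such a good interchange format here is that it is just plain text: a vertex per line ("v x y z") followed by faces ("f ...") that reference those vertices by 1-based index, which is why almost every 3D package can read and write it. A small Python sketch that writes the starting point of all this, a unit cube, as a valid OBJ file:

```python
# Minimal illustration of the Wavefront OBJ text format: "v" lines list
# vertex positions, "f" lines list faces as 1-based vertex indices.

def cube_obj() -> str:
    """Return OBJ text for a unit cube: 8 vertices, 6 quad faces."""
    vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    faces = [
        (1, 2, 4, 3), (5, 7, 8, 6),  # the x = 0 and x = 1 sides
        (1, 5, 6, 2), (3, 4, 8, 7),  # the y = 0 and y = 1 sides
        (1, 3, 7, 5), (2, 6, 8, 4),  # the z = 0 and z = 1 sides
    ]
    lines = ["v %d %d %d" % v for v in vertices]
    lines += ["f %d %d %d %d" % f for f in faces]
    return "\n".join(lines) + "\n"

# Write it out; Unity3d, Blender etc. can open the result directly.
with open("cube.obj", "w") as fh:
    fh.write(cube_obj())
```

Being this simple and human-readable is also what makes it easy to pipe a Cloud9 export onward into other 3D packages for extra effects.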
Now when are Second Life going to sort out meshes 😉 I can generate some very organic structures then even as a techie.
If you want to see the actual Unity3d spin-around, it is here. I published it from the free Windows version, as my paid-for Mac version is on the version 3 beta and so can't publish to the web yet.
BTW it was using this that predlet 1.0 uttered the immortal words "Dad, I love your job, it's way cool" 🙂
People sometimes moan that the user contributed web is full of rubbish, but whilst that may be true it does mean that talent shines out more. I have been enjoying the occasional watch of Mystery Guitar Man's work on YouTube.
It is deceptively difficult to create edits like this. I love the use of annotations at the end too.
Many of us have the equipment and software to create great video and music now. So the fact there is an open and free distribution channel (i.e. the web not just YouTube) is fantastic.
If you want to, and if you have the talent you can just get on with it 🙂
On my birthday trip out we all popped along to the National Motor Museum at Beaulieu. It now has a special exhibit which contains many of the results of the Top Gear challenges.
I find Top Gear both hilarious and inspirational, and being a moderate petrolhead it's also a good subject matter.
Seeing the fruits of the production team's labour shows some brilliant, but deliberately flawed, ideas.
Everything tends to "not go quite as we expected", which is of course the sort of thing you would expect, like bolting the fronts of two cars together to make it easier to get away in reverse, as above.
It's all brilliantly visual, but any techs out there know that this sort of mad idea, and bolting together of things that shouldn't be bolted together, goes on all the time in software; you just can't see it.
Many a time I have seen a project strap the software equivalent of a Reliant Robin to a large rocket on a tight budget in order to emulate the space shuttle.
Of course the spirit of engineering is to know when to smash things with a hammer and when to add strength and design. Top Gear deliberately makes sure they tip to the wrong side of working, for comedic effect.
The cascade of constant changes and challenges their set piece projects face also mirrors what we see all the time: make something for one purpose, and then a hand delivers a new envelope asking for sort of what it was meant to do, but requiring a bit more hacking and bodging on top.
I shouldn't delve too deeply into the clever approach they have developed; it ruins the magic and the humour. More snaps of some of the pieces here.
There is of course a production team hard at work for these guys, whilst they generally shun the sort of tech I deal with on a daily basis they are in fact geeks cut from the same cloth as the rest of us.
In order to hack and bodge things you do actually have to know how to do things properly in the first place. It's kind of like jazz: you need to be a skilled musician to improvise.