In a break from the all-day video calls and events I quickly downloaded Qlone to my iPhone. I also printed out the black and white scanning reference mat. This technique of a reference mat and a moving camera has been around for decades with varying degrees of success, so I was not sure how it would fare.
In about a minute of waving the phone around to fill the visual indicators on a virtual dome over this character I had a scan of him: a full colour 3D model in the app. I was so impressed with the initial quality that I paid for the upgrade, which did this remarkable thing.
Bear in mind the figure is just a bit of plastic. I did not highlight any joints or add any context to the model, but magically it was just able to walk around and animate itself instantly.
It can export to a multitude of 3D formats, so I think I will have to get Unity back up and running again. Very impressed, well done Qlone.
For many years I have seen the CES show appear in magazines, then TV and then of course all over social media. As a long time tech geek and early adopter I have always wanted to attend, but never been able to. In my corporate days getting approval for a train to London was a chore. As a startup I never had the time or money either, preferring to invest in the gadgets like the Oculus rift or paying for a Unity license so I could build things. On the TV show we talked about CES, and if we had gone to a 4th series it was on the cards.
This year, with my industry analyst role in IoT I was able to go. Of course as a work trip it was a bit different to just being able to take the show in.
I had briefing after briefing with a bit of travel time in between for the two main days I was there. One thing that is not always obvious is just how big the show is. Firstly there is the Las Vegas Convention Center, with North and South Halls that are bigger than most airports. It was so big that I only really got to visit the South Halls; the North Hall of cars, a motor show in its own right, eluded me. All the first day's meetings were around the South Halls. Day 2 was at the other areas of the show: the hotels have their own convention centres, and floors and suites get rented out too. The Venetian Sands, Bellagio and Aria all had lots going on, each as big as any UK show it seemed.
Walking 9-10 miles a day at a trade show, and still not seeing everything, gives you an indication of the size.
Again I pretty much missed most of the expo floor because of meetings, but on the day I flew out I had an hour to pop back to the Sands main hall and see some things.
The split across the entire show, from giant corporate powerhouses to tiny startups with a single table, was amazing. I had assumed it was all the former, but the latter is heavily supported, and with Kickstarters and maker culture now mainstream it will continue to be really important.
One thing I was there to see was how much Augmented Reality was taking off in parallel with the VR wave. There were a lot of glasses, and of course the Hololens and the industrially focussed Daqri smart helmet. It is still not really there as a consumer focus yet, though the Asus Zenfone AR, powered by Tango and Qualcomm, was announced. It is not on sale until later in the year (no date given), but it may put true AR into people's hands.
DAQRI Smart Helmet
It was CES's 50th anniversary, which was fitting given I turn 50 this year too. It may not be the most exciting bucket list tick, but I have already done lots of mine, and need to refresh the list anyway. I am not sure that will include riding in this human-sized quadcopter that you fly with a smartphone though!
I guess we best experience everything before these guys and their brethren take over.
Still, at least we can 3D print new parts for ourselves.
As you will see in this album the whole place just becomes a blur of everything looking the same, lights, sound, people, attract loops etc. All very fitting to be in Vegas.
So that made a whirlwind start to this year. This time last year I had just published Cont3xt and was wondering what the next steps were going to be. This year I have stacks of IoT research and writing work to get on with, a 50th birthday to not get worried about, imminent wisdom tooth removal (yuk) and, all being well, a 2nd Degree/Il Dan black belt test in Choi Kwang Do. So onwards and upwards. Pil Seung!
Tomorrow I am heading to predlet 1.0's secondary school to talk to the year eight class (including predlet 1.0) about programming, tech and interesting stuff like the Internet of Things and why it is relevant to them. I am putting most of the same, but updated, things in there to garner interest. The extra labelling of IoT to make things more relevant doesn't alter the vector towards the future I discuss.
It is also a talk about how my career, or whatever this is called, has flowed alongside new developments, and of course I do also mention the book. It is relevant in several ways. The subject matter, and being able to share and explain how a programmer's mind or a gamer's mind works, is the obvious one. The ongoing story of keeping up with, and accelerating past, the current technology, yet keeping it just possible, is pretty much where I live as a technical evangelist. You have to know how things work; you have to be able to build things in order to help push things forward. Also the feedback loop of 3D printing creating objects that become inputs for IoT is still an incredible one to consider and share.
I also found that it became a relevant IoT(ish) example.
The book started as an idea, which became a digital product, designed, published and sold from anywhere in the world. However, it became a physical object too. I was not planning that originally. The printed object takes up space and has a presence. A pile of them become things; not internet of things, but things generated from the internet. They and the intangible ebook generate digital sales figures. That data, tuned for just me and my sales, becomes the intelligence I act upon for the follow-up book, for digital adverts etc. It is also part of the global sales figures, the charts, the overall big picture from the big data on what is being read and sold. Products have always had this of course, but the figures are now instant. The ebook can be borrowed in a digital library and I get figures on how many pages have been read. The printed book can't do that, yet. However, it could with the right devices as part of it. That would be useful for me: how people read it, where they stop, thinking time, heart rate for excitement, stress for complexity etc. The ebook is part way there; the physical book is as far away as ever from that. The question then becomes: if we relate that use case to something else, would it be desirable, and make sense, to push the instrumentation further into the product?
Another gadget I may take with me is the Emotiv headset to show them. Oh! Look! It measures brain engagement, relaxation, stress etc. for the current activity. If it just knew which book, film or game I was reading, watching or playing then it could quite happily correlate and send this brain wave data to Amazon, and on to the individual author, regardless of ebook or physical version. It could then also aggregate that all up into the big data picture.
I generated a word cloud of the first chapter too. Everyone knows what a word cloud is now I think, but it can be a relevant example of making sense, or trying to make sense, of massive amounts of data.
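The counting that sits behind a word cloud is simple enough to sketch in a few lines of Python. This is just an illustration, not the tool I used; the sample text and the stop-word list are made up for the demo.

```python
# Tally word frequencies in a chunk of text, drop common "stop" words,
# and list the most frequent terms - the raw data a word cloud draws from.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

def word_frequencies(text, top_n=5):
    """Return the top_n (word, count) pairs, ignoring stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = ("The printer hummed and the printer clicked as the machine "
          "laid down plastic, layer after layer of plastic.")
print(word_frequencies(sample, top_n=3))
```

A word cloud then just scales each word's font size by its count; the frequency table is the whole trick.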
The instrumentation loop is here, the parts of it may not flow totally without friction but we can experiment with what might work, prototype then maybe build the proper versions.
I am also going to go a bit fractal thinking on them, reminding them that everything in computing is just adding up two numbers. The half adder circuit, an XOR and an AND gate, is the basis of it all. The number of times you add up is what the computer leverages. From that spawns great complexity, and also great simplicity.
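As a minimal sketch of my own (not part of the talk materials), the half adder is easy to show in Python using the bitwise XOR and AND operators, and chaining the carry gives a full adder, the building block of whole-number addition:

```python
# Half adder: XOR gives the sum bit, AND gives the carry bit.
def half_adder(a, b):
    """Add two single bits; return (sum_bit, carry_bit)."""
    return a ^ b, a & b

# Two half adders plus an OR make a full adder, which handles a carry-in.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
```

Stack enough full adders side by side and you have the adder at the heart of every CPU, which is the point of the fractal: huge complexity built from one tiny circuit.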
Then it's a Minecraft and Python demo. Virtual world and code, mixed maybe with a HoloLens video on the end, of Minecraft of course.
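For anyone curious, the kind of demo I mean looks roughly like this. It is a hedged sketch in the style of the Minecraft Pi API (the mcpi Python module); the build_tower helper, the block id constant and the coordinates are my own illustration. The helper only needs an object with a setBlock method, so it can be driven by the real game connection or a stand-in.

```python
STONE = 1  # illustrative block id, as used by the Pi edition

def build_tower(mc, x, y, z, height):
    """Stack `height` stone blocks straight up from (x, y, z)."""
    for i in range(height):
        mc.setBlock(x, y + i, z, STONE)

# Against a running game world it would be driven roughly like this:
#   from mcpi.minecraft import Minecraft
#   mc = Minecraft.create()          # connects to the local game
#   mc.postToChat("Hello from Python!")
#   build_tower(mc, 0, 0, 0, 5)
```

Watching a for loop raise a tower in front of them tends to land the "code changes the world" point faster than any slide.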
There is always something going on in science and emerging technology. However, some weeks just bring a bumper bundle of interesting things all at once. Here in the UK the biggest event had to be the near total eclipse of the Sun. We had some great coverage on the TV, with Stargazing Live sending a plane up over the Faroe Islands to capture the total eclipse. I was all armed and ready with a homemade pinhole camera.
This turned out great, but unfortunately we were quite overcast here so it was of little use as a camera. I also spent the eclipse at Predlet 2.0's celebration assembly. They had the eclipse on the big screen for all the primary school kids to see. Whilst we had the lights off in the hall it did not get totally dark, but it did get a bit chilly. It was great that the school keyed into this major event that demonstrates the motion of the planets. So, rather like the last one in 1999, I can certainly say I will remember where I was and what we were doing. (A conversation I had with @asanyfuleno on Twitter and Facebook.)
This brings me on to our technological change and the trajectory we are on. In 1999 I was in IBM Hursley with my fellow Interactive Media Centre crew. A mix of designers, producers and techies and no suits. It was still the early days of the web and we were building all sorts of things for all sorts of clients. In particular during that eclipse it was some more work for Vauxhall cars. We downed tools briefly to look out across Hursley park to see the dusk settle in and flocks of birds head to roost thinking it was night.
It does not seem that long ago but… it is 16 years. When we were building those quite advanced websites Amazon was just starting, Flickr was 6 years away, Twitter about 7 years away, Facebook a mere 5 (but with a long lead time) and we were only on Grand Theft Auto II, still a top-down Pac-Man clone. We were connected to lots of our colleagues on instant messaging, but general communications were phone and SMS and of course email. So we were not tweeting and sharing pictures, or, as people now do, live feeds on Meerkat. Many people were not internet banking; trust in communications and computers was not high. We were pre dot-com boom/bust too. Not to mention no one really had much internet access out and about or at home. Certainly no wi-fi routers! We were all enthralled by the still excellent Matrix movie. The phone in that, the slide-down communicator-style Nokia, became one of the iconic images of the decade.
NB. As I posted this I saw this wonderful Lego remake of the lobby scene so just had to add it to this post 🙂
It was a wild time of innovation and one many of us remember fondly I think. People tended to leave us alone as we brought in money doing things no managers or career vultures knew to jump on. So that eclipse reminds me of a time I set off on a path of trying to be in that zone all the time. Back then I was getting my first samples from a company that made 3D printers, as I was amazed at the principle, and I was pondering what we could do with designers that knew 3D and this emerging tech. We were also busy playing Quake and Unreal in shared virtual worlds across the LAN in our downtime, so I was already forming my thoughts on our connection to one another through these environments: experiences that I still share today in a newer hi-tech world where patterns are repeating themselves, but better and faster.
That leads me to another movie reference, in the spirit of staying in this zone: this footage of a new type of Terminator T-1000 style 3D manufacturing. 3D printers may not be mainstream as such, but many more people now get the concept of additive manufacture: laying down layer after layer of material such as plastic, the same way we made coil clay pots out of snakes of rolled clay when we were at school. A newer form of 3D printing went a little viral on the interwebs this week from carbon3d.com. This exciting development pulls an object out of a resin. It is really the same layering principle, but done in a much more sophisticated way. CLIP (Continuous Liquid Interface Production) balances exposing the resin to oxygen or to UV light. Oxygen keeps it as a liquid (hence left behind) and targeted UV light causes the resin to solidify (polymerisation). Similar liquid-based processes use lasers fired into a resin; this one though slowly draws the object out of the resin, giving it a slightly more ethereal or sci-fi look. It is also very quick in comparison to other methods. Whilst this video is going faster than actual speed, it is still a matter of minutes rather than hours to create objects.
Another video doing the rounds that shows some interesting future developments is one from the Google-funded Magic Leap. This is a blended reality/augmented reality company. We already have Microsoft moving into the space with Hololens. Much of Magic Leap's announcements have not been as clearly defined as one might hope: there is some magic coming and it is a leap. Microsoft of course had a great pre-release of Hololens, some impressive video but some equally impressive testimonials and articles from journalists and bloggers who got to experience the alpha kit. The video appeared to be a mock-up, but a fairly believable one.
Magic Leap were set to do a TED talk but apparently pulled out at the last minute and this video appeared instead.
It got a lot of people excited, which is the point, but it seems even more of a mock-up video than any of the others. It is very well done, as the Lord of the Rings FX company Weta Workshop have a joint credit. The technology is clearly coming. I don't think we are there yet in understanding and getting that sort of precise registration and overlay. We will, and one day it may look like this video. Of course it's not just the tech but the design that has to keep up. If you are designing a game that has aliens coming out of the ceiling it will have a lot less impact if you try and play outside, or in an atrium with a massive vaulted ceiling. The game has to understand not just where you are and what the physical space is like, but how to use that space. Think about a blended reality board game, or an actual board game for that matter. The physical objects to play Risk, Monopoly etc. require a large flat surface, usually a table. You clear the table of obstructions, set up and play. Now a projected board game could be done on any surface: Monopoly on the wall. It could even remove or project over things hung on the wall, obscure lights etc. It is relying on a degree of focus in one place. A fast-moving shooting game where you walk around or look around will be reading the environment, but the game design has to adjust what it throws at you to make it continue to make sense. We already have AR games looking for ghosts and creatures that just float around. They are interesting but not engaging enough. Full VR doesn't have this problem as it replaces the entire world with a new view. Even there, there are lots of unanswered questions of design: how stories are told, cut scenes, attracting attention, user interfaces, reducing motion sickness etc. Blending with a physical world, where that world could be anywhere or anything, is going to take a lot more early adopter suffering and a number of false starts and dead ends.
It can of course combine with rapid 3D printing, creating new things in the real world that fit with the game or AR/BR experience. Yes, that's more complexity, more things to try and figure out. It is why it is such a rich and vibrant subject.
Just bringing it back a little bit to another development this week: the latest in the Battlefield gaming franchise, Battlefield Hardline, went live. This, in case you don't do games, is a 3D first person shooter. Previous games have been military; this one is cops and robbers in a modern Miami Vice TV style. One of the features of Battlefield is the massive online combat. It features large spaces and it makes you feel like a small speck in the map. Other shooters, like Call of Duty, are more close in. The large expanse means Battlefield can focus on things like vehicles: flying helicopters and driving cars. Not just you though; you can be a pilot and deliver your colleagues to the drop zone whilst your gunner gives cover.
This new game has a great online multiplayer mode called Hotwire that taps into vehicles really well. Usually game modes are capture the flag, or holding a specific fixed point to win the game. In Hotwire you grab a car, lorry etc. and try and keep it safe. It means that you have to do some mad game driving, weaving and dodging. It also means that your compatriots get to hang out of the windows of the car trying to shoot back at the bad guys. It is very funny and entertaining.
What also struck me was the one-player game called “episodes”. This deliberately sticks with a TV cop show format as you play through the levels. After a level has finished, the how-you-did page looks like Netflix, with a “next episode starts in 20 seconds” down in the bottom right. If you quit a level before heading to the main menu it does a “next time in Battlefield Hardline” mini montage of the next episode. As the first cut scene played I got a Miami Vice vibe, which the main character then referenced directly. It was great timing, and an in-joke, but one for us of a certain age for whom Miami Vice was the show to watch. Fantastic stuff.
I really like its style. It also has a logo builder on the website so in keeping with what I always do I built a version of the Feeding Edge logo in a Hardline style.
I may not be great at the game, as I bounce around looking for new experiences in games, but I do like a good bit of customisation to explore.
I think that, along with the “that's always the way we have done things” line that gets thrown about whenever anything new challenges the status quo, the line “but we tried that before and it did not work” is equally dangerous.
I do partially agree that if you keep doing the same thing and get the same outcome you should try something else. The adjustment to something else may be to revisit and gently tweak the approach. If you are throwing dice and trying to get a double six and failing, you just have to throw them in a different way until you get the right result to win the game. (Of course chaos theory and dependence on initial conditions in a feedback loop mean you are always throwing differently with every throw, but that's another story.)
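The dice metaphor is worth a quick sanity check: every throw of two fair dice has the same 1-in-36 chance of a double six however you shake them, so on average you need about 36 throws. A small simulation (my own, just to illustrate the point) bears that out:

```python
import random

def throws_until_double_six(rng):
    """Count throws of two dice until both come up six."""
    throws = 0
    while True:
        throws += 1
        if rng.randint(1, 6) == 6 and rng.randint(1, 6) == 6:
            return throws

rng = random.Random(42)  # seeded so the run is repeatable
trials = [throws_until_double_six(rng) for _ in range(20_000)]
average = sum(trials) / len(trials)
print(round(average, 1))  # hovers around the expected 36
```

Which is really the point of the metaphor: changing how you throw doesn't change the odds, but changing the dice does.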
At the weekend we visited the Milestones Museum in Basingstoke. It is just around the corner but we had not been before. It is a giant hangar with a huge recreation of 1900s and 1930s streets, and then some 50s, 60s, 70s and 80s artefacts too.
One thing that struck me in the camera shop exhibit was this.
It is two lenses looking at two similar photos, which gives a great stereoscopic effect. Now 3D photography and the concept of a picture for each eye have been around for a very, very long time. This is a plastic construction, probably from the 60s. As kids I know lots of us had the iconic View-Master to view 3D content. I distinctly remember the Superman 12 page comic that I had. I can still picture the images in my head.
These devices let us see worlds, to engage with ideas albeit with an extra dollop of imagination.
These toys and tricks went out of favour and we headed into the digital age. We had a bit of a revisit of 3D with TVs and movies, but they miss one crucial element. The old still 3D of the View-Master was like a hint of reality, a story, a reading of a book. Films instead were a large amount of information, story and engagement, locked into the screen you were focussed on. It is an art form based on a framed window.
Video games started, and continue, to occupy that framed space, but in amongst the platformers and the puzzle games we have found 3D virtual worlds: spaces where we decide how to navigate, from gun-toting first person shooters to free-roaming driving simulators. Each of these tends to have the activity, shooting, driving etc., as one aspect to control and the camera or view as another. Some of these worlds thrive on users coming together and building or working together. They connect us as humans, proxied by a digital environment over great distances. It is why Microsoft are looking to buy Mojang, the creators of Minecraft, for $2 billion; the virtual world is of value, the interaction is of value.
Now, we did have a virtual reality revolution back in the 80s, and one before that: headsets feeding our eyes two distinct images of environments, ones that we could control by turning our head and looking. What we did not have then is the lightweight high-end screens (as we see in smartphones), heavily commercialised location and orientation technology (again as we see in smartphones) and a population who understand and want to navigate 3D space and metaverses (most kids will have played Minecraft, or seen it and wanted to play, or similar). So it makes a great deal of sense for the rise of the headset and of Virtual Reality to blend into these other advances of technology now. The Oculus Rift, Morpheus et al.
Yes they get in the way of interacting with the physical world, but they give huge advantages in the virtual world adding to the sense of presence.
So if people say “look, we did 3D” or “look, we did VR and it didn't work last time” then they are missing the fact that we are not just rolling the dice differently. We have a new set of dice, this time designed, conceived and created in a virtual world and printed to perfection. We may not roll a double six straight away, but when we do we win a much bigger game.
We may look on this period in time as a clunky use of tech, but we may also look back on it with a curious fondness and a charm that leads us on to better things.
This example being the laughing sailor: you put your money in and it laughs. Puppetry and animatronics at a very basic level, but there is something endearing about it.
Thank you to @marleyman007 for pinging me this link on Twitter. It made me feel really happy for two reasons.
The first is very obvious: 3D printing making a positive impact on people's lives, but in a fun and unusual way.
(Photo By Aaron Brown creator of the print)
Based on an idea from an organisation called e-NABLE, where a network of people interested in 3D printing and in making a difference are designing and 3D printing prosthetic hands for kids, he created this super hero hand based on Wolverine. Now that has to make you get a little buzz of excitement and joy, and will surely help people see the 3D printing revolution and suddenly realise its potential.
The other reason this made me smile, and somewhat energised me, is the loop that has now come full circle. As you know I like to share ideas. A few years back in 2010 my friend Scotty aka @starbase37 set up a meeting between John Marley (@marleyman007) and me to talk about tech. John was starting out with a pitch for the TV show The Cool Stuff Collective. Scotty had pointed him to my blog as a way of finding out new stuff. That meeting led to John asking me to not just help research but to present on The Cool Stuff Collective as Super g33k 🙂 One of the first two shows we recorded featured 3D printing.
It was September 4th 2010 that I wrote about that. (Is it really 4 years ago!). Now this blog post features something that John has sent to me over Twitter :).
We revisited 3D printing in series 3 too, with the RepRap. I am very proud we were able to show 7-12 year olds 3D printing on TV 4 years ago. Imagine that! Some of them are now approaching A-Levels.
The last thing I said on the last show, in a round up, was that I really hoped that we would get technology properly taught in schools, to allow kids to be able to make and create with tech, not just use it. That is certainly heading in the right direction despite all the arguments about whether coding is coding for coding's sake etc. The BBC have new assets and a plan to involve computing more in various programmes, including Nina and the Neurons, which is a great place for it to be represented on Cbeebies.
Meanwhile, in the past 4 years there has been a massive change in how kids engage with TV and media content. Many of them are self-broadcasting things such as gameplay on Twitch.tv, or spending a lot of time watching Minecraft videos from Stampylonghead.
He and his fellow casters have created a style of commentary on games that is about fun and exploration. It has certainly led to the predlets spending a lot of time talking and explaining what they are doing whilst they play, usually with a silly voice and lots of screams and whoops.
I do get asked why there are no more Cool Stuff Collective shows. It is something that would be great to do again. There is more new tech than ever that is accessible to young and old alike. There is room to find things that can be shared, but that can also be used in school and at home. Whether the regular TV format for kids' TV is suffering a demise, or is just morphing into a more shared online experience, is hard to tell. I have an image in my head of a show that truly crosses the borders. I recently added this comment to a thread on Facebook about the coding-or-not argument and the relevance of TV:
“Imagine the show being a catchup on a massive open source application gathering. Live hackathons with audience participation. Bringing everyone into the “project”: designers, writers, coders, testers, art and science mixed. Live touch points across the web, assets to share and update. Voting features and project aims up and down.
Take all that stuff TV does for no reason, like X Factor votes, and direct it to mass creation. It does not just have to focus on code. It could be awesome.”
There is certainly content to be shared, in a different way to the excellent Gadget Show or other gadget/science programmes: a merging of live and online interaction, even a thread of storyline created through synchronous and asynchronous audience participation.
So whilst I get teary eyed (in a good way) looking back at Cool Stuff Collective, where I had some of the best times ever and most fun “working”, I am still very proud of what we covered but I know we (or someone) could cover the new things in a new way. The thing with kids TV is that you have an impact on a cohort and then they move on. If we can reach this current batch of 7-11 year olds and support and inspire them to understand and build with technology then we have a great future ahead of us. For now I will try and catch things here and also longer explanations of the world of tech and evolutions in Flush Magazine and occasionally look at the old footage in the showreel and smile 🙂
…. particularly when you can make a sword with the pen!
This week I had a small broadband outage when the cables got cut during some engineering work down the road. I took the opportunity to escape the current project and popped to Maplin to stock up on some bits and pieces.
Whilst there I saw they had the 3Doodler pen in stock. I had very nearly backed this one on Kickstarter but missed the deadline. So I thought it was high time to get one and try it.
It is the same principle as a 3D printer, but instead of stepper motors it is controlled by hand.
The pen has a very hot tip, through which is pushed a thin plastic rod that melts. As the plastic melts it is soft and can be piped around, and then it hardens very quickly.
My first attempts with it were not great.
However, I checked out the gallery and saw that many of the builds were in fact done by making the various component faces and then using the Doodler to stick them together.
Rather than make a giant structure I thought I would try the company logo. Feeding Edge as a logo has existed in many forms, from its early draft on an in-game skateboard t-shirt to virtual world instantiations and decals on cars. So a 3D doodle of the logo was worth a try.
It works really well to print out a stencil and draw around it, which is what I did; the paper does not burn, it cools the plastic, and the model does not stick.
I know it's July and this looks like a Christmas tree ornament, but it was fun to do.
I also tried a dodecahedron, but I think I lost a piece; still, it is a 3D structure.
So I would say it does what it says: it lets you doodle in space. The two speed settings for ejecting the plastic and the freedom of movement give this a very craft-maker feel. Working out how and where to solder joins between elements, or how to break something down into flatter faces, is an interesting thought process.
The predlets asked if I could do the Choi Kwang Do logo. That may take a bit of work, but I may give it a go. Though I think it then needs to be sealed in a casing afterwards.
The smell of burning plastic was probably not the ideal odour for our new kitchen, but us early adopters have to suffer for our art 🙂
In a twist of serendipity I was invited to the Twitter UK offices by @andypiper when I was in London yesterday. Amongst their cool installations there was this neon sign.
The sentiment is one I agree with, and many more firms should aim for this. However, it did get me thinking: I wonder if there is glow-in-the-dark meltable ABS plastic I can put in the 3Doodler. Rather than the stained glass leaded window effect of my logo I could have a neon style glow-in-the-dark one 🙂
Modelling in 3D from scratch is hard. It is digital sculpture. Whilst the tools help, it is really for the hands of an artist, as much as a hammer, chisel and block of marble are for creating something from nothing. There are all sorts of wizards and helper programs, plus lots of existing artwork in the public domain, that let us less artistic, more techie people get some things done. I don't think you can ever replace the talent and eye of an artist, but some technology is going to help as a starting point for 3D objects.
I have been looking at Kinect Fusion, which uses the Kinect to scan and then produce a 3D model of the things it is looking at. This looks like a very promising way to get existing objects into a good format for dropping into virtual worlds and Unity3D.
You will notice from this sample picture from Microsoft that the mesh created can be an OBJ, as in normal 3D packages, or an STL, the format for 3D printers, so this is nearly a 3D photocopier.
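STL is a simple enough format that you can see why it suits this photocopier pipeline: an ASCII STL file is just a named list of triangles, each with a normal and three vertices. Here is a minimal sketch of my own (with a trivial placeholder normal) that emits a one-triangle "mesh":

```python
# Emit an ASCII STL solid containing a single triangular facet.
def triangle_to_stl(name, v1, v2, v3):
    lines = [f"solid {name}"]
    lines.append("  facet normal 0 0 1")  # placeholder normal for the sketch
    lines.append("    outer loop")
    for v in (v1, v2, v3):
        lines.append("      vertex {} {} {}".format(*v))
    lines.append("    endloop")
    lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = triangle_to_stl("demo", (0, 0, 0), (1, 0, 0), (0, 1, 0))
print(stl)
```

A scanner like Kinect Fusion produces thousands of these facets; a slicer then turns them into the layer paths a 3D printer actually follows.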
I say I have been looking at it, though I have not been able to use it fully yet. The sample is a Windows-only application. Whilst the Kinect works fine as a “normal” Kinect on the Mac, the scanning application isn't supported (for obvious reasons). I run Windows via Boot Camp on the 5 year old MacBook Pro. Unfortunately that does not have a full DX11 graphics card, and without all the bells and whistles of DX11 the application doesn't work. So I may have to get a full Windows machine, as long as I can be sure it's fully DX11 compliant.
There is another interesting development in the works that @asanyfuleno pinged my way before I saw it explode across my Twitter feed. It is the ability to create a 3D object from three simple swipes across a 2D picture.
It sounds crazy, but it does seem to work in this video, which is also honest enough to show cases where it does not work.
This is an academic presentation from SIGGRAPH Asia 2013. There are always interesting things to read about from the various SIGGRAPH events such as these.
Things at SIGGRAPH and other high-end tech conferences can sometimes seem out of the reach of most of us. However (just to bring this full circle), this paper from 2002, just 11 years ago, shows a 3D scanning application which performs the same task as the Kinect Fusion application I started this post with.
It was also good to see recently from SIGGRAPH the wonderful MCOR paper printer now doing full colour. We had some samples from MCOR on Cool Stuff Collective and I wrote them a little article. Now though they are doing even more clever things with paper and colour.
It is great seeing the advances in tech and art and even more exciting as they mature into accessible tools and toys for anyone to use.
I was going to write another reminiscing post about Wimbledon and the fantastic teamwork and effort that goes into delivering the tech there, but I thought I would be repeating myself.
However, this year I can’t not mention Wimbledon for several reasons. The first is of course Andy Murray winning the whole Championship. The serendipity of it being 77 years since a Brit won, combined with it being 7/7, that’s a lot of lucky numbers in a line 🙂
The predlets had got very into the tennis this year, but had forgotten, or never been there to remember, the 10 years I got to sit in the bunker there with the old company. They wanted to hear what it was like and some of the stories. One of them said “Dad, do you miss it?”. My honest reply was yes, of course I miss the event and the team of people that delivered it all. What I didn’t miss was the constant battles before the Championships that everyone who works there suffers. Wimbledon is a few weeks of intense work, and on the corporate side of things it is generally not regarded as billable. There are some full-time sports events guys and marketing people, but most of the UK team were willing volunteers and experts trying to carve time out of their normal work to do something high profile, risky and exciting. That meant anyone who was in control of your time felt they lost you for a few weeks. It also meant there were other people who may have felt jealousy or envy at the teamwork and comradeship the event still creates. Once the event is on, people are pretty much locked in, doing something time-dependent and high profile, so they have to be left alone. Once the event is done there can be some retribution, though at the same time people who have done the event are in high demand to be wheeled out in front of customers as trophy experts. So the pattern is pretty much “you are not doing this… oh, you are doing it… well, you shouldn’t… can you please help us on this project with the expertise you have gained, it will make the customer happy”. Repeat.
The resilience of the guys and girls still managing to get there and do the job despite everything is as impressive as the effort itself. I hope the aftermath, with a British winner, is slightly softened, though I suspect any jealousy of the fact you were there when “it” happened may cause noses to be put out of joint. 🙂
However, the thing that impressed me this year was that someone, somewhere had managed to get 3D printing into Wimbledon (something I didn’t manage in my time there, though I did try 🙂 )
They are taking the data analytics and some social media feedback and 3D printing data souvenirs based on that “demand”.
I had somehow missed this, as I was following more of the tennis matches this year than the press for the first time in a long while. So thanks to Andy Burns, who has managed to fight the Wimbledon fight for so many years he may get his own statue, for changing his Facebook profile picture to a 3D-printed trophy, which made me sit up and take notice.
Well done to whoever managed to persuade someone to do something with this. A kindred spirit who I hope has had, and will continue to have, as much fun, sweat and tears at Wimbledon as many of us have had.
I tweeted about this the other day, but after it came up in the Q&A session at yesterday’s blended reality pitch I realized I had not put anything more here about this interesting device.
The QUMARION is rather like the posable wooden mannequins that artists use to practise drawing figures.
Unlike those, though, it is fully instrumented with sensors to work with a digital description of a human skeleton.
So as you pose the figure, that translates to poses in the 3D modelling package.
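As a rough illustration of what that translation involves (entirely my own sketch, not Qumarion’s actual protocol), each joint sensor reading becomes a rotation applied down a skeleton chain, which is the standard forward kinematics idea:

```python
import math

# Toy forward kinematics: joint angles (as a physical mannequin's sensors
# might report them) -> 2D positions of each joint down a single limb chain.
def limb_positions(angles_deg, bone_lengths):
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for angle, length in zip(angles_deg, bone_lengths):
        heading += math.radians(angle)   # each joint rotates relative to its parent
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# Shoulder bent 90 degrees, elbow straight: the arm ends up pointing
# straight up, elbow at roughly (0, 1) and wrist at roughly (0, 2).
print(limb_positions([90, 0], [1.0, 1.0]))
```

Bend a physical joint, update the matching angle, re-run the chain: that is the whole trick, just done in 3D with a full skeleton in the real device.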
A purist 3D designer may regard that as undermining their skills in manipulating and understanding the interface on a 2D screen. However, this came up as an answer to a question about blended reality, as I was talking about how sometimes the technology can get in the way, while other times it disappears and lets us use what we know to enhance an experience.
The QUMARION is rather like using the real guitar in Rocksmith; it may be an appropriate tool for understanding and communicating with an application.
I know that when I use 3D packages there is a barrier in having to deal with the mental translation of a 2D representation. Being able to just pose a physical device and explain what is needed physically would work for me.
A long while ago I was trying to make some tennis animations for a well known Second Life project. I found myself standing and looking in a mirror, performing the action, then sitting down making that action work on a very simple digital rig, and then having to tune it so that it looked better for the screen. I had no motion capture, which would obviously have helped in the first place, but even for the extra artistic interpretation and subtle tweaks, a hands-on device like this would have helped a great deal.
Now this device is input-only as far as I know, so there is an obvious extension in using it as an output device too. If I mocap a move, and the device can play that back in physical steps and frames, then I could tweak and enhance it. Obviously in games there are some moves that just don’t exist; you can’t get certain flips and jumps happening. You can however start with a basis of what you can do.
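The playback side of that idea can be sketched very simply (my own illustration, with made-up pose data): step between captured keyframe poses by interpolating joint angles, so you can stop at, inspect and tweak any in-between frame.

```python
# Sketch of stepping through a captured move: linear interpolation
# between keyframe poses (each pose is a list of joint angles in degrees).
def interpolate_pose(pose_a, pose_b, t):
    """Blend two poses; t=0 gives pose_a, t=1 gives pose_b."""
    return [a + (b - a) * t for a, b in zip(pose_a, pose_b)]

def playback(keyframes, steps_per_segment):
    """Yield the in-between poses for stepping through the move."""
    for a, b in zip(keyframes, keyframes[1:]):
        for step in range(steps_per_segment):
            yield interpolate_pose(a, b, step / steps_per_segment)
    yield keyframes[-1]

# Two hypothetical three-joint poses, four steps between them
crouch, stand = [10, 80, 20], [0, 0, 0]
frames = list(playback([crouch, stand], 4))
print(frames[2])  # the halfway pose -> [5.0, 40.0, 10.0]
```

With an output-capable mannequin, each yielded pose would drive the motors instead of a screen, giving exactly that frame-by-frame physical scrubbing.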
Again, of course, this relates to studying the forms in Choi Kwang Do. A physical, but digitally recorded, recreation may help someone even more to understand. Also a mannequin can be made to hold a position that may be a transition from one move to another that a person bound by the laws of physics cannot. It becomes a physical pause button.
Another extension to the idea is that this restricts you to one rig. A component model that lets you build any size or shape of joints to create the armature for any creation would be incredible. Combine that with the ability to 3D print the components in the first place, put them together, have that create the rigged model and then animate away. There are some fantastic opportunities for people to create interesting things as this approach evolves.