nlp


Lie to me, avatars and actors

I have recently got into watching Lie to me, running on Sky 1 at the moment. It is an intriguing programme that has Tim Roth as an expert in human behaviour, in particular the body language and facial expressions people show when they lie (apparently based on the work of Paul Ekman). If nothing else it is worth watching for Tim Roth’s performance, with his typical style of flipping intensities and focus as he talks. I do find it interesting how the various elements of betraying one’s true feelings are portrayed in the programme though. I have said to people that I think the show is designed to make us all feel clever. Much of what happens is made fairly obvious at the time, highlighting a crooked smile, touching an ear etc., and as humans we do many of those things and know via instinct what they mean.
There is of course an interesting loop here, in that we are in fact watching actors, all of whom have a job that involves convincing us they are something or someone else, yet at the same time they are convincing us that they are lying as part of that role.
It got me thinking about the things I often say to people about engaging in virtual worlds and online, and how different people have a different Neuro-Linguistic Programming balance, i.e. some people refer to things with visual metaphors even in speech, “I see your point of view”, others with aural references, “I hear what you are saying” etc. These basics of human communication need to be considered in providing the right experience for the right people online, and they show why some people don’t take to some platforms.
In all this we have the overhanging spectre of people lying online. The accusation that people hide who they are online gets mixed up with people simply proxying who they are online via an avatar. It is these things that remove our usual human facial cues for picking up on lies and truth. I believe the reason video conferencing in a social setting seems less popular than you might expect is the disconnect between these gestures and the direct feedback from the others on the call. However, when in avatar-mediated space, does the fact that we have to decide on various ways to explicitly engage with people, gestures, sounds, words, position, pace, timing etc., provide us with a different set of human detectors for deciding who to trust?
I know there is not an easy answer to any of this, and the spectrum of usage is huge, from method-acting role play to fast-talking business deals. Though I do think each of us finds a way to blend what we instinctively know in a physical meeting about who we are talking to with how we interact online, giving out the right signals and reacting to the right ones. Some people, as in physical interactions, are going to be better at it than others.
I am not sure if the Lie to me script writers are going to delve into online interaction in a future episode, but it would be interesting to see their take on the science applied there; it is a very entertaining show and an interesting subject.